Test Report: Docker_Windows 14995

411d4579fd248fd57a4259437564c3e08f354535:2022-09-21:25810

Failed tests (146/224)

Order  Failed test  Duration (s)
20 TestOffline 54.25
22 TestAddons/Setup 49.97
23 TestCertOptions 54.73
24 TestCertExpiration 308.84
25 TestDockerFlags 54.06
26 TestForceSystemdFlag 54.22
27 TestForceSystemdEnv 53.73
32 TestErrorSpam/setup 48.48
41 TestFunctional/serial/StartWithProxy 50.05
43 TestFunctional/serial/SoftStart 76.39
44 TestFunctional/serial/KubeContext 1.09
45 TestFunctional/serial/KubectlGetPods 1.05
52 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 1.03
53 TestFunctional/serial/CacheCmd/cache/cache_reload 3.77
55 TestFunctional/serial/MinikubeKubectlCmd 1.42
56 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.4
57 TestFunctional/serial/ExtraConfig 76.17
58 TestFunctional/serial/ComponentHealth 1.05
59 TestFunctional/serial/LogsCmd 1.65
60 TestFunctional/serial/LogsFileCmd 1.49
66 TestFunctional/parallel/StatusCmd 2.66
69 TestFunctional/parallel/ServiceCmd 1.98
70 TestFunctional/parallel/ServiceCmdConnect 1.98
72 TestFunctional/parallel/PersistentVolumeClaim 0.89
74 TestFunctional/parallel/SSHCmd 3.3
75 TestFunctional/parallel/CpCmd 4.59
76 TestFunctional/parallel/MySQL 1.14
77 TestFunctional/parallel/FileSync 2.09
78 TestFunctional/parallel/CertSync 7.63
82 TestFunctional/parallel/NodeLabels 1.08
84 TestFunctional/parallel/NonActiveRuntimeDisabled 1.24
89 TestFunctional/parallel/DockerEnv/powershell 2.85
91 TestFunctional/parallel/Version/components 1.14
92 TestFunctional/parallel/ImageCommands/ImageListShort 0.59
93 TestFunctional/parallel/ImageCommands/ImageListTable 0.62
94 TestFunctional/parallel/ImageCommands/ImageListJson 0.62
95 TestFunctional/parallel/ImageCommands/ImageListYaml 0.6
96 TestFunctional/parallel/ImageCommands/ImageBuild 2.3
97 TestFunctional/parallel/ImageCommands/Setup 0.38
98 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.41
102 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
108 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.38
109 TestFunctional/parallel/UpdateContextCmd/no_changes 1.15
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 1.11
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 1.1
112 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.42
113 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.57
115 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.1
116 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.19
122 TestIngressAddonLegacy/StartLegacyK8sCluster 50.69
124 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 1.88
126 TestIngressAddonLegacy/serial/ValidateIngressAddons 0.84
129 TestJSONOutput/start/Command 48.91
132 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
133 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0.01
135 TestJSONOutput/pause/Command 1.07
141 TestJSONOutput/unpause/Command 1.08
147 TestJSONOutput/stop/Command 19.1
150 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
154 TestKicCustomNetwork/create_custom_network 199.33
156 TestKicExistingNetwork 0.85
157 TestKicCustomSubnet 204.2
159 TestMinikubeProfile 52.84
162 TestMountStart/serial/StartWithMountFirst 49.84
165 TestMultiNode/serial/FreshStart2Nodes 49.32
166 TestMultiNode/serial/DeployApp2Nodes 4.33
167 TestMultiNode/serial/PingHostFrom2Pods 1.35
168 TestMultiNode/serial/AddNode 1.9
169 TestMultiNode/serial/ProfileList 1.58
170 TestMultiNode/serial/CopyFile 1.44
171 TestMultiNode/serial/StopNode 2.76
172 TestMultiNode/serial/StartAfterStop 2.48
173 TestMultiNode/serial/RestartKeepsNodes 96.38
174 TestMultiNode/serial/DeleteNode 2.39
175 TestMultiNode/serial/StopMultiNode 20.97
176 TestMultiNode/serial/RestartMultiNode 76.75
177 TestMultiNode/serial/ValidateNameConflict 100.53
181 TestPreload 52.07
182 TestScheduledStopWindows 50.79
186 TestInsufficientStorage 11.18
187 TestRunningBinaryUpgrade 136.76
189 TestKubernetesUpgrade 72.08
190 TestMissingContainerUpgrade 128.26
194 TestStoppedBinaryUpgrade/Upgrade 107.98
195 TestNoKubernetes/serial/StartWithK8s 53.63
196 TestNoKubernetes/serial/StartWithStopK8s 78.92
216 TestPause/serial/Start 51.75
217 TestStoppedBinaryUpgrade/MinikubeLogs 1.59
218 TestNoKubernetes/serial/Start 77.51
221 TestNoKubernetes/serial/Stop 20.31
222 TestNoKubernetes/serial/StartNoArgs 64.84
224 TestStartStop/group/old-k8s-version/serial/FirstStart 51.4
226 TestStartStop/group/no-preload/serial/FirstStart 50.7
228 TestStartStop/group/embed-certs/serial/FirstStart 50.57
229 TestStartStop/group/old-k8s-version/serial/DeployApp 2.13
230 TestStartStop/group/no-preload/serial/DeployApp 2.01
231 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.76
232 TestStartStop/group/old-k8s-version/serial/Stop 20.21
233 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.72
234 TestStartStop/group/no-preload/serial/Stop 20.17
235 TestStartStop/group/embed-certs/serial/DeployApp 1.78
236 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.57
237 TestStartStop/group/embed-certs/serial/Stop 20.26
238 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 2.01
239 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 2
240 TestStartStop/group/old-k8s-version/serial/SecondStart 77.57
241 TestStartStop/group/no-preload/serial/SecondStart 78.37
242 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 2.02
243 TestStartStop/group/embed-certs/serial/SecondStart 77.78
244 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.86
245 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 1.07
246 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 1.96
247 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.91
248 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 1.08
249 TestStartStop/group/old-k8s-version/serial/Pause 2.85
250 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 2.03
251 TestStartStop/group/no-preload/serial/Pause 2.9
253 TestStartStop/group/default-k8s-different-port/serial/FirstStart 50.09
254 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.87
256 TestStartStop/group/newest-cni/serial/FirstStart 50.8
257 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 1.04
258 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 2.09
259 TestStartStop/group/embed-certs/serial/Pause 2.87
260 TestNetworkPlugins/group/auto/Start 49.29
261 TestNetworkPlugins/group/calico/Start 49.18
262 TestStartStop/group/default-k8s-different-port/serial/DeployApp 1.93
265 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 1.68
266 TestStartStop/group/newest-cni/serial/Stop 20.25
267 TestStartStop/group/default-k8s-different-port/serial/Stop 20.19
268 TestNetworkPlugins/group/cilium/Start 49.42
269 TestNetworkPlugins/group/false/Start 49.29
270 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 2.11
271 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 2.03
272 TestStartStop/group/newest-cni/serial/SecondStart 77.76
273 TestStartStop/group/default-k8s-different-port/serial/SecondStart 78.34
274 TestNetworkPlugins/group/bridge/Start 49.62
275 TestNetworkPlugins/group/enable-default-cni/Start 50
278 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 2.04
279 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 0.79
280 TestStartStop/group/newest-cni/serial/Pause 2.81
281 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 1.05
282 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 2.06
283 TestStartStop/group/default-k8s-different-port/serial/Pause 2.89
284 TestNetworkPlugins/group/kubenet/Start 49.2
285 TestNetworkPlugins/group/kindnet/Start 48.94
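
To re-run a single failure from this list locally, the test can be selected through the standard Go test runner. A minimal sketch, assuming the usual minikube repo layout (integration tests under test/integration); suite-specific arguments such as the driver and the path to the built minikube binary are omitted here and would need to be supplied, and the timeout value is only an example:

    go test -v -timeout 30m ./test/integration -run TestOffline
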
TestOffline (54.25s)
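
The trace below fails for two host-side Docker reasons visible in the stderr log: creating the dedicated bridge network 192.168.49.0/24 conflicts with an existing network (br-a04d36bfb3cf), and creating the cluster volume fails because /var/lib/docker/volumes is on a read-only file system. A quick way to confirm the subnet overlap on the affected host, using only standard Docker CLI calls and the network ID taken from the error message (it is assumed the ID prefix still resolves):

    docker network ls
    docker network inspect a04d36bfb3cf --format "{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}"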

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20220921220434-5916 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p offline-docker-20220921220434-5916 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: exit status 60 (51.6194143s)

-- stdout --
	* [offline-docker-20220921220434-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node offline-docker-20220921220434-5916 in cluster offline-docker-20220921220434-5916
	* Pulling base image ...
	* Another minikube instance is downloading dependencies... 
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-20220921220434-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 22:04:34.767282    1144 out.go:296] Setting OutFile to fd 948 ...
	I0921 22:04:34.874650    1144 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:04:34.874650    1144 out.go:309] Setting ErrFile to fd 1016...
	I0921 22:04:34.874650    1144 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:04:34.897162    1144 out.go:303] Setting JSON to false
	I0921 22:04:34.900577    1144 start.go:115] hostinfo: {"hostname":"minikube2","uptime":3943,"bootTime":1663793931,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:04:34.900712    1144 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:04:34.909938    1144 out.go:177] * [offline-docker-20220921220434-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:04:34.919103    1144 notify.go:214] Checking for updates...
	I0921 22:04:34.924949    1144 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:04:34.931942    1144 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:04:34.940179    1144 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:04:34.947279    1144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:04:34.951928    1144 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:04:34.951928    1144 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:04:35.315526    1144 docker.go:137] docker version: linux-20.10.17
	I0921 22:04:35.327133    1144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:04:35.910461    1144 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:66 SystemTime:2022-09-21 22:04:35.4921389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:04:35.914016    1144 out.go:177] * Using the docker driver based on user configuration
	I0921 22:04:35.917118    1144 start.go:284] selected driver: docker
	I0921 22:04:35.917118    1144 start.go:808] validating driver "docker" against <nil>
	I0921 22:04:35.917287    1144 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:04:35.999574    1144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:04:36.615670    1144 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:66 SystemTime:2022-09-21 22:04:36.1932061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:04:36.615670    1144 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:04:36.617369    1144 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:04:36.622093    1144 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:04:36.624658    1144 cni.go:95] Creating CNI manager for ""
	I0921 22:04:36.624658    1144 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:04:36.624658    1144 start_flags.go:316] config:
	{Name:offline-docker-20220921220434-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:offline-docker-20220921220434-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:04:36.627024    1144 out.go:177] * Starting control plane node offline-docker-20220921220434-5916 in cluster offline-docker-20220921220434-5916
	I0921 22:04:36.631146    1144 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:04:36.634130    1144 out.go:177] * Pulling base image ...
	I0921 22:04:36.637539    1144 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:04:36.637564    1144 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:04:36.637979    1144 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:04:36.637979    1144 cache.go:57] Caching tarball of preloaded images
	I0921 22:04:36.638408    1144 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:04:36.638408    1144 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:04:36.638408    1144 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\offline-docker-20220921220434-5916\config.json ...
	I0921 22:04:36.639166    1144 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\offline-docker-20220921220434-5916\config.json: {Name:mk90cd8ceca68ac605f7fe370796fe51baf7a061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:04:36.836776    1144 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:04:36.836776    1144 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:04:36.836776    1144 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:04:36.836776    1144 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:04:36.836776    1144 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:04:36.836776    1144 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:04:36.837783    1144 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:04:36.837783    1144 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:04:36.837783    1144 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:04:39.304627    1144 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:04:39.304717    1144 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:04:39.304772    1144 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:04:39.413907    1144 out.go:204] * Another minikube instance is downloading dependencies... 
	I0921 22:04:41.080111    1144 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:04:41.358569    1144 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms
	I0921 22:04:42.845426    1144 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:04:42.845426    1144 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:04:42.845426    1144 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:04:42.845426    1144 start.go:364] acquiring machines lock for offline-docker-20220921220434-5916: {Name:mk6d326239296bebe2abd9bf4f30176ccb3d7cab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:04:42.846079    1144 start.go:368] acquired machines lock for "offline-docker-20220921220434-5916" in 652.7µs
	I0921 22:04:42.846398    1144 start.go:93] Provisioning new machine with config: &{Name:offline-docker-20220921220434-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:offline-docker-20220921220434-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:04:42.846481    1144 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:04:43.056519    1144 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:04:43.057522    1144 start.go:159] libmachine.API.Create for "offline-docker-20220921220434-5916" (driver="docker")
	I0921 22:04:43.057522    1144 client.go:168] LocalClient.Create starting
	I0921 22:04:43.057522    1144 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:04:43.057522    1144 main.go:134] libmachine: Decoding PEM data...
	I0921 22:04:43.057522    1144 main.go:134] libmachine: Parsing certificate...
	I0921 22:04:43.057522    1144 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:04:43.057522    1144 main.go:134] libmachine: Decoding PEM data...
	I0921 22:04:43.057522    1144 main.go:134] libmachine: Parsing certificate...
	I0921 22:04:43.065775    1144 cli_runner.go:164] Run: docker network inspect offline-docker-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:04:43.363375    1144 cli_runner.go:211] docker network inspect offline-docker-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:04:43.371779    1144 network_create.go:272] running [docker network inspect offline-docker-20220921220434-5916] to gather additional debugging logs...
	I0921 22:04:43.371779    1144 cli_runner.go:164] Run: docker network inspect offline-docker-20220921220434-5916
	W0921 22:04:43.640969    1144 cli_runner.go:211] docker network inspect offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:04:43.641045    1144 network_create.go:275] error running [docker network inspect offline-docker-20220921220434-5916]: docker network inspect offline-docker-20220921220434-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20220921220434-5916
	I0921 22:04:43.641045    1144 network_create.go:277] output of [docker network inspect offline-docker-20220921220434-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20220921220434-5916
	
	** /stderr **
	I0921 22:04:43.649759    1144 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:04:43.910711    1144 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014ad48] misses:0}
	I0921 22:04:43.911443    1144 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:04:43.911520    1144 network_create.go:115] attempt to create docker network offline-docker-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:04:43.918234    1144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 offline-docker-20220921220434-5916
	W0921 22:04:44.120532    1144 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 offline-docker-20220921220434-5916 returned with exit code 1
	E0921 22:04:44.120709    1144 network_create.go:104] error while trying to create docker network offline-docker-20220921220434-5916 192.168.49.0/24: create docker network offline-docker-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 19ca13ae98424fada5e5caf81e16952f7d98a1fd6789b1fa05aea946ea4e8ae5 (br-19ca13ae9842): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:04:44.121083    1144 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 19ca13ae98424fada5e5caf81e16952f7d98a1fd6789b1fa05aea946ea4e8ae5 (br-19ca13ae9842): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 19ca13ae98424fada5e5caf81e16952f7d98a1fd6789b1fa05aea946ea4e8ae5 (br-19ca13ae9842): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:04:44.138776    1144 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:04:44.355585    1144 cli_runner.go:164] Run: docker volume create offline-docker-20220921220434-5916 --label name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:04:44.580644    1144 cli_runner.go:211] docker volume create offline-docker-20220921220434-5916 --label name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:04:44.580701    1144 client.go:171] LocalClient.Create took 1.5231681s
	I0921 22:04:46.593258    1144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:04:46.605760    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:04:46.799142    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:04:46.799142    1144 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:47.085423    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:04:47.281454    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:04:47.281454    1144 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:47.838334    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:04:48.035210    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	W0921 22:04:48.035463    1144 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	
	W0921 22:04:48.035463    1144 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:48.045674    1144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:04:48.052470    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:04:48.255188    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:04:48.255188    1144 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:48.512319    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:04:48.704743    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:04:48.704743    1144 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:49.069881    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:04:49.294546    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:04:49.294546    1144 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:49.982867    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:04:50.259638    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	W0921 22:04:50.259638    1144 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	
	W0921 22:04:50.259638    1144 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:50.259638    1144 start.go:128] duration metric: createHost completed in 7.4131015s
	I0921 22:04:50.259638    1144 start.go:83] releasing machines lock for "offline-docker-20220921220434-5916", held for 7.4135034s
	W0921 22:04:50.259638    1144 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for offline-docker-20220921220434-5916 container: docker volume create offline-docker-20220921220434-5916 --label name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220921220434-5916': mkdir /var/lib/docker/volumes/offline-docker-20220921220434-5916: read-only file system
	I0921 22:04:50.274119    1144 cli_runner.go:164] Run: docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:50.462907    1144 cli_runner.go:211] docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:04:50.462907    1144 delete.go:82] Unable to get host status for offline-docker-20220921220434-5916, assuming it has already been deleted: state: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	W0921 22:04:50.462907    1144 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for offline-docker-20220921220434-5916 container: docker volume create offline-docker-20220921220434-5916 --label name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220921220434-5916': mkdir /var/lib/docker/volumes/offline-docker-20220921220434-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for offline-docker-20220921220434-5916 container: docker volume create offline-docker-20220921220434-5916 --label name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220921220434-5916': mkdir /var/lib/docker/volumes/offline-docker-20220921220434-5916: read-only file system
	
	I0921 22:04:50.462907    1144 start.go:617] Will try again in 5 seconds ...
	I0921 22:04:55.470956    1144 start.go:364] acquiring machines lock for offline-docker-20220921220434-5916: {Name:mk6d326239296bebe2abd9bf4f30176ccb3d7cab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:04:55.471389    1144 start.go:368] acquired machines lock for "offline-docker-20220921220434-5916" in 201µs
	I0921 22:04:55.471741    1144 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:04:55.471807    1144 fix.go:55] fixHost starting: 
	I0921 22:04:55.486405    1144 cli_runner.go:164] Run: docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:55.705383    1144 cli_runner.go:211] docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:04:55.705535    1144 fix.go:103] recreateIfNeeded on offline-docker-20220921220434-5916: state= err=unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:55.705535    1144 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:04:55.712076    1144 out.go:177] * docker "offline-docker-20220921220434-5916" container is missing, will recreate.
	I0921 22:04:55.713684    1144 delete.go:124] DEMOLISHING offline-docker-20220921220434-5916 ...
	I0921 22:04:55.728956    1144 cli_runner.go:164] Run: docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:55.908315    1144 cli_runner.go:211] docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:04:55.908315    1144 stop.go:75] unable to get state: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:55.908315    1144 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:55.921330    1144 cli_runner.go:164] Run: docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:56.125350    1144 cli_runner.go:211] docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:04:56.125593    1144 delete.go:82] Unable to get host status for offline-docker-20220921220434-5916, assuming it has already been deleted: state: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:56.134354    1144 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-20220921220434-5916
	W0921 22:04:56.344595    1144 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:04:56.344657    1144 kic.go:356] could not find the container offline-docker-20220921220434-5916 to remove it. will try anyways
	I0921 22:04:56.352109    1144 cli_runner.go:164] Run: docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:56.546124    1144 cli_runner.go:211] docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:04:56.546210    1144 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:56.554271    1144 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-20220921220434-5916 /bin/bash -c "sudo init 0"
	W0921 22:04:56.732176    1144 cli_runner.go:211] docker exec --privileged -t offline-docker-20220921220434-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:04:56.732176    1144 oci.go:646] error shutdown offline-docker-20220921220434-5916: docker exec --privileged -t offline-docker-20220921220434-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:57.752152    1144 cli_runner.go:164] Run: docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:57.988026    1144 cli_runner.go:211] docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:04:57.988026    1144 oci.go:658] temporary error verifying shutdown: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:57.988026    1144 oci.go:660] temporary error: container offline-docker-20220921220434-5916 status is  but expect it to be exited
	I0921 22:04:57.988026    1144 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:58.337326    1144 cli_runner.go:164] Run: docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:58.530982    1144 cli_runner.go:211] docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:04:58.531327    1144 oci.go:658] temporary error verifying shutdown: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:58.531378    1144 oci.go:660] temporary error: container offline-docker-20220921220434-5916 status is  but expect it to be exited
	I0921 22:04:58.531413    1144 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:58.989793    1144 cli_runner.go:164] Run: docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:59.196370    1144 cli_runner.go:211] docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:04:59.196370    1144 oci.go:658] temporary error verifying shutdown: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:04:59.196370    1144 oci.go:660] temporary error: container offline-docker-20220921220434-5916 status is  but expect it to be exited
	I0921 22:04:59.196370    1144 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:00.105641    1144 cli_runner.go:164] Run: docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:00.299242    1144 cli_runner.go:211] docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:00.299242    1144 oci.go:658] temporary error verifying shutdown: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:00.299242    1144 oci.go:660] temporary error: container offline-docker-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:00.299242    1144 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:02.033560    1144 cli_runner.go:164] Run: docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:02.230056    1144 cli_runner.go:211] docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:02.230088    1144 oci.go:658] temporary error verifying shutdown: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:02.230088    1144 oci.go:660] temporary error: container offline-docker-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:02.230088    1144 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:05.572935    1144 cli_runner.go:164] Run: docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:05.790152    1144 cli_runner.go:211] docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:05.790432    1144 oci.go:658] temporary error verifying shutdown: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:05.790432    1144 oci.go:660] temporary error: container offline-docker-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:05.790432    1144 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:08.524323    1144 cli_runner.go:164] Run: docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:08.711042    1144 cli_runner.go:211] docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:08.711042    1144 oci.go:658] temporary error verifying shutdown: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:08.711042    1144 oci.go:660] temporary error: container offline-docker-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:08.711042    1144 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:13.741752    1144 cli_runner.go:164] Run: docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:13.934850    1144 cli_runner.go:211] docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:13.934850    1144 oci.go:658] temporary error verifying shutdown: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:13.934850    1144 oci.go:660] temporary error: container offline-docker-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:13.934850    1144 oci.go:88] couldn't shut down offline-docker-20220921220434-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
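The long retry sequence above is minikube verifying that the old container has actually stopped before deleting it; because the container no longer exists at all, every docker container inspect exits 1 and the loop finally gives up with "might be okay". A standalone Go sketch of the same check follows; the helper name, retry count, and starting delay are illustrative and are not minikube's actual oci/retry code.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // containerStatus runs the same command the log shows:
    // docker container inspect <name> --format {{.State.Status}}
    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        name := "offline-docker-20220921220434-5916"
        delay := 300 * time.Millisecond // starting delay is an assumption
        for attempt := 0; attempt < 10; attempt++ {
            status, err := containerStatus(name)
            if err != nil && strings.Contains(status, "No such container") {
                fmt.Println("container already gone; treating shutdown as complete")
                return
            }
            if status == "exited" {
                fmt.Println("container exited cleanly")
                return
            }
            fmt.Printf("state %q, retrying in %v\n", status, delay)
            time.Sleep(delay)
            delay *= 2 // grow the wait, like the increasing intervals above
        }
        fmt.Println("couldn't verify shutdown (might be okay)")
    }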
	 
	I0921 22:05:13.942677    1144 cli_runner.go:164] Run: docker rm -f -v offline-docker-20220921220434-5916
	I0921 22:05:14.160339    1144 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-20220921220434-5916
	W0921 22:05:14.399796    1144 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:05:14.405837    1144 cli_runner.go:164] Run: docker network inspect offline-docker-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:05:14.614740    1144 cli_runner.go:211] docker network inspect offline-docker-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:05:14.626328    1144 network_create.go:272] running [docker network inspect offline-docker-20220921220434-5916] to gather additional debugging logs...
	I0921 22:05:14.626328    1144 cli_runner.go:164] Run: docker network inspect offline-docker-20220921220434-5916
	W0921 22:05:14.846488    1144 cli_runner.go:211] docker network inspect offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:05:14.846488    1144 network_create.go:275] error running [docker network inspect offline-docker-20220921220434-5916]: docker network inspect offline-docker-20220921220434-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20220921220434-5916
	I0921 22:05:14.846488    1144 network_create.go:277] output of [docker network inspect offline-docker-20220921220434-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20220921220434-5916
	
	** /stderr **
	W0921 22:05:14.847446    1144 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:05:14.847446    1144 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:05:15.858364    1144 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:05:15.877591    1144 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:05:15.877591    1144 start.go:159] libmachine.API.Create for "offline-docker-20220921220434-5916" (driver="docker")
	I0921 22:05:15.877591    1144 client.go:168] LocalClient.Create starting
	I0921 22:05:15.878644    1144 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:05:15.879012    1144 main.go:134] libmachine: Decoding PEM data...
	I0921 22:05:15.879087    1144 main.go:134] libmachine: Parsing certificate...
	I0921 22:05:15.879178    1144 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:05:15.879178    1144 main.go:134] libmachine: Decoding PEM data...
	I0921 22:05:15.879178    1144 main.go:134] libmachine: Parsing certificate...
	I0921 22:05:15.889040    1144 cli_runner.go:164] Run: docker network inspect offline-docker-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:05:16.093585    1144 cli_runner.go:211] docker network inspect offline-docker-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:05:16.103253    1144 network_create.go:272] running [docker network inspect offline-docker-20220921220434-5916] to gather additional debugging logs...
	I0921 22:05:16.103281    1144 cli_runner.go:164] Run: docker network inspect offline-docker-20220921220434-5916
	W0921 22:05:16.310997    1144 cli_runner.go:211] docker network inspect offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:05:16.310997    1144 network_create.go:275] error running [docker network inspect offline-docker-20220921220434-5916]: docker network inspect offline-docker-20220921220434-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20220921220434-5916
	I0921 22:05:16.310997    1144 network_create.go:277] output of [docker network inspect offline-docker-20220921220434-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20220921220434-5916
	
	** /stderr **
	I0921 22:05:16.320559    1144 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:05:16.518680    1144 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014ad48] amended:false}} dirty:map[] misses:0}
	I0921 22:05:16.518680    1144 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:05:16.535294    1144 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014ad48] amended:true}} dirty:map[192.168.49.0:0xc00014ad48 192.168.58.0:0xc0000dc0f8] misses:0}
	I0921 22:05:16.535294    1144 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
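Subnet selection here is straightforward: 192.168.49.0/24 still has an unexpired reservation from an earlier cluster, so the next candidate, 192.168.58.0/24, is taken and reserved for one minute. A minimal sketch of that walk; the step of 9 in the third octet is inferred only from the two values in this log, and the function is illustrative rather than minikube's network.go.

    package main

    import "fmt"

    // pickFreeSubnet walks candidate /24 networks and returns the first one
    // that is not reserved. Candidates and step size mirror what the log
    // shows (192.168.49.0 -> 192.168.58.0); the real candidate list may differ.
    func pickFreeSubnet(reserved map[string]bool) (string, bool) {
        for third := 49; third <= 254; third += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            if reserved[subnet] {
                continue // e.g. held by an unexpired reservation
            }
            return subnet, true
        }
        return "", false
    }

    func main() {
        reserved := map[string]bool{"192.168.49.0/24": true}
        if s, ok := pickFreeSubnet(reserved); ok {
            fmt.Println("using free private subnet", s) // prints 192.168.58.0/24
        }
    }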
	I0921 22:05:16.535294    1144 network_create.go:115] attempt to create docker network offline-docker-20220921220434-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:05:16.543961    1144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 offline-docker-20220921220434-5916
	W0921 22:05:16.733480    1144 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 offline-docker-20220921220434-5916 returned with exit code 1
	E0921 22:05:16.733480    1144 network_create.go:104] error while trying to create docker network offline-docker-20220921220434-5916 192.168.58.0/24: create docker network offline-docker-20220921220434-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 70f5cdcbb229d7bc39f5d1c9cd1320e1cc4b3f3b75802639e5e2889dce14da6f (br-70f5cdcbb229): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:05:16.733480    1144 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220921220434-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 70f5cdcbb229d7bc39f5d1c9cd1320e1cc4b3f3b75802639e5e2889dce14da6f (br-70f5cdcbb229): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network offline-docker-20220921220434-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 70f5cdcbb229d7bc39f5d1c9cd1320e1cc4b3f3b75802639e5e2889dce14da6f (br-70f5cdcbb229): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
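The create fails because an existing bridge network (br-8a3cd8d165a4 in the daemon error) already covers 192.168.58.0/24, even though minikube's in-process reservation table did not know about it. One way to see which network owns the range is to dump every network's subnet; a small Go sketch using the same docker inspect templates that appear in this log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Lists every docker network with its IPv4 subnet(s) so the network
    // that already owns 192.168.58.0/24 can be identified.
    func main() {
        ids, err := exec.Command("docker", "network", "ls", "-q").Output()
        if err != nil {
            panic(err)
        }
        for _, id := range strings.Fields(string(ids)) {
            out, err := exec.Command("docker", "network", "inspect", id,
                "--format", "{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
            if err != nil {
                continue // network may have disappeared between ls and inspect
            }
            fmt.Print(string(out))
        }
    }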
	
	I0921 22:05:16.741479    1144 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:05:16.963423    1144 cli_runner.go:164] Run: docker volume create offline-docker-20220921220434-5916 --label name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:05:17.161305    1144 cli_runner.go:211] docker volume create offline-docker-20220921220434-5916 --label name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:05:17.161305    1144 client.go:171] LocalClient.Create took 1.2837042s
	I0921 22:05:19.185941    1144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:05:19.194542    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:19.388182    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:05:19.388182    1144 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:19.642816    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:19.870841    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:05:19.870841    1144 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:20.186679    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:20.378786    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:05:20.378914    1144 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:20.840351    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:21.009326    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	W0921 22:05:21.009326    1144 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	
	W0921 22:05:21.009326    1144 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
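The df checks above never reach a shell because minikube first has to resolve the container's published SSH port, and that inspect fails for the missing container. The port lookup itself is just a Go template handed to docker container inspect; a standalone sketch, with the container name taken from this log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Resolves the host port mapped to the container's sshd (22/tcp) using
    // the same template minikube passes to docker container inspect. With
    // the container missing, this fails exactly like the log above.
    func main() {
        name := "offline-docker-20220921220434-5916"
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).CombinedOutput()
        if err != nil {
            fmt.Printf("no ssh port: %v: %s", err, out)
            return
        }
        fmt.Println("ssh is published on host port", strings.TrimSpace(string(out)))
    }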
	I0921 22:05:21.018302    1144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:05:21.025303    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:21.215623    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:05:21.215623    1144 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:21.412966    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:21.584222    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:05:21.584222    1144 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:21.861358    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:22.068800    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:05:22.069020    1144 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:22.565230    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:22.775310    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	W0921 22:05:22.775540    1144 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	
	W0921 22:05:22.775589    1144 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:22.775589    1144 start.go:128] duration metric: createHost completed in 6.9171734s
	I0921 22:05:22.788088    1144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:05:22.794491    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:22.979523    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:05:22.979523    1144 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:23.335575    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:23.527134    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:05:23.527134    1144 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:23.835505    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:24.020805    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:05:24.020805    1144 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:24.483557    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:24.676556    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	W0921 22:05:24.676782    1144 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	
	W0921 22:05:24.676834    1144 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:24.691217    1144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:05:24.703650    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:24.894510    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:05:24.894510    1144 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:25.089034    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:25.334391    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	I0921 22:05:25.334391    1144 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:25.860123    1144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916
	W0921 22:05:26.064951    1144 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916 returned with exit code 1
	W0921 22:05:26.064951    1144 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	
	W0921 22:05:26.064951    1144 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916
	I0921 22:05:26.064951    1144 fix.go:57] fixHost completed within 30.5929148s
	I0921 22:05:26.064951    1144 start.go:83] releasing machines lock for "offline-docker-20220921220434-5916", held for 30.5933319s
	W0921 22:05:26.065888    1144 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-20220921220434-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220921220434-5916 container: docker volume create offline-docker-20220921220434-5916 --label name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220921220434-5916': mkdir /var/lib/docker/volumes/offline-docker-20220921220434-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p offline-docker-20220921220434-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220921220434-5916 container: docker volume create offline-docker-20220921220434-5916 --label name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220921220434-5916': mkdir /var/lib/docker/volumes/offline-docker-20220921220434-5916: read-only file system
	
	I0921 22:05:26.069944    1144 out.go:177] 
	W0921 22:05:26.072166    1144 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220921220434-5916 container: docker volume create offline-docker-20220921220434-5916 --label name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220921220434-5916': mkdir /var/lib/docker/volumes/offline-docker-20220921220434-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for offline-docker-20220921220434-5916 container: docker volume create offline-docker-20220921220434-5916 --label name.minikube.sigs.k8s.io=offline-docker-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create offline-docker-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/offline-docker-20220921220434-5916': mkdir /var/lib/docker/volumes/offline-docker-20220921220434-5916: read-only file system
	
	W0921 22:05:26.072166    1144 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:05:26.072166    1144 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:05:26.075818    1144 out.go:177] 

** /stderr **
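The decisive error is the last one: the Docker Desktop VM's /var/lib/docker has gone read-only, so even creating the volume for the node container fails, which is why minikube classifies this as PR_DOCKER_READONLY_VOL and suggests restarting Docker. The condition can be probed independently of minikube with a throwaway volume; the probe name below is arbitrary.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Reproduces the failing step in isolation: if the daemon's volume root
    // is read-only, even a throwaway volume create fails with the same
    // "read-only file system" error seen above.
    func main() {
        name := "readonly-probe" // throwaway volume, removed below
        out, err := exec.Command("docker", "volume", "create", name).CombinedOutput()
        if err != nil {
            if strings.Contains(string(out), "read-only file system") {
                fmt.Println("daemon storage is read-only; restart Docker Desktop")
                return
            }
            fmt.Printf("volume create failed for another reason: %s", out)
            return
        }
        fmt.Println("volume create works; the read-only condition has cleared")
        _ = exec.Command("docker", "volume", "rm", name).Run() // clean up the probe
    }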
aab_offline_test.go:58: out/minikube-windows-amd64.exe start -p offline-docker-20220921220434-5916 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker failed: exit status 60
panic.go:522: *** TestOffline FAILED at 2022-09-21 22:05:26.2224271 +0000 GMT m=+2133.774105301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-20220921220434-5916

=== CONT  TestOffline
helpers_test.go:231: (dbg) Non-zero exit: docker inspect offline-docker-20220921220434-5916: exit status 1 (243.1114ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: offline-docker-20220921220434-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-20220921220434-5916 -n offline-docker-20220921220434-5916

=== CONT  TestOffline
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-20220921220434-5916 -n offline-docker-20220921220434-5916: exit status 7 (562.5516ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:05:27.019879    7884 status.go:247] status error: host: state: unknown state "offline-docker-20220921220434-5916": docker container inspect offline-docker-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20220921220434-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-20220921220434-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-20220921220434-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20220921220434-5916

=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20220921220434-5916: (1.7066258s)
--- FAIL: TestOffline (54.25s)

TestAddons/Setup (49.97s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-20220921213059-5916 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p addons-20220921213059-5916 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: exit status 60 (49.8663306s)

-- stdout --
	* [addons-20220921213059-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node addons-20220921213059-5916 in cluster addons-20220921213059-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "addons-20220921213059-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 21:30:59.945715    5424 out.go:296] Setting OutFile to fd 612 ...
	I0921 21:31:00.004895    5424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:31:00.004895    5424 out.go:309] Setting ErrFile to fd 580...
	I0921 21:31:00.004895    5424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:31:00.035747    5424 out.go:303] Setting JSON to false
	I0921 21:31:00.038740    5424 start.go:115] hostinfo: {"hostname":"minikube2","uptime":1928,"bootTime":1663793932,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 21:31:00.038740    5424 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 21:31:00.043553    5424 out.go:177] * [addons-20220921213059-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 21:31:00.046548    5424 notify.go:214] Checking for updates...
	I0921 21:31:00.053425    5424 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 21:31:00.056257    5424 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 21:31:00.058993    5424 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 21:31:00.061484    5424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:31:00.064233    5424 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:31:00.330373    5424 docker.go:137] docker version: linux-20.10.17
	I0921 21:31:00.338722    5424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:31:00.861421    5424 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 21:31:00.4829694 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:31:00.868437    5424 out.go:177] * Using the docker driver based on user configuration
	I0921 21:31:00.886092    5424 start.go:284] selected driver: docker
	I0921 21:31:00.886092    5424 start.go:808] validating driver "docker" against <nil>
	I0921 21:31:00.886092    5424 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 21:31:00.948712    5424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:31:01.481141    5424 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 21:31:01.0983956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:31:01.481472    5424 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 21:31:01.481979    5424 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 21:31:01.484928    5424 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 21:31:01.487490    5424 cni.go:95] Creating CNI manager for ""
	I0921 21:31:01.487490    5424 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 21:31:01.487490    5424 start_flags.go:316] config:
	{Name:addons-20220921213059-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:addons-20220921213059-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:31:01.490686    5424 out.go:177] * Starting control plane node addons-20220921213059-5916 in cluster addons-20220921213059-5916
	I0921 21:31:01.492746    5424 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 21:31:01.494809    5424 out.go:177] * Pulling base image ...
	I0921 21:31:01.497888    5424 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 21:31:01.497888    5424 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:31:01.498831    5424 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 21:31:01.498831    5424 cache.go:57] Caching tarball of preloaded images
	I0921 21:31:01.498831    5424 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 21:31:01.498831    5424 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 21:31:01.499842    5424 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-20220921213059-5916\config.json ...
	I0921 21:31:01.499842    5424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-20220921213059-5916\config.json: {Name:mk034b06a8fb420a28a1ea7d83e12b882866a132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 21:31:01.698825    5424 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 21:31:01.698893    5424 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:31:01.698893    5424 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:31:01.698893    5424 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 21:31:01.698893    5424 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 21:31:01.698893    5424 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 21:31:01.699654    5424 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 21:31:01.699681    5424 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 21:31:01.699681    5424 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:31:03.991638    5424 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 21:31:03.991789    5424 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 21:31:03.991836    5424 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 21:31:03.991915    5424 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:31:04.208638    5424 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s 900ms
	I0921 21:31:05.859716    5424 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 21:31:05.859716    5424 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 21:31:05.859716    5424 cache.go:208] Successfully downloaded all kic artifacts
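
For reference, the kicbase image that was just loaded from the cached tarball can be checked directly in the local Docker daemon. A minimal sketch, with the image name and tag taken from the log above (the digest column may differ on other runs):

    $ docker images --digests gcr.io/k8s-minikube/kicbase
    $ docker image inspect gcr.io/k8s-minikube/kicbase:v0.0.34 --format "{{.Id}}"

If the second command prints an image ID, the fallback path minikube reports above worked and the base image is available locally despite the download warning.
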
	I0921 21:31:05.860714    5424 start.go:364] acquiring machines lock for addons-20220921213059-5916: {Name:mkac5adb40a8f54b09c7c8314ae3ae6a236be444 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:31:05.860714    5424 start.go:368] acquired machines lock for "addons-20220921213059-5916" in 0s
	I0921 21:31:05.861867    5424 start.go:93] Provisioning new machine with config: &{Name:addons-20220921213059-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:addons-20220921213059-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 21:31:05.862136    5424 start.go:125] createHost starting for "" (driver="docker")
	I0921 21:31:05.877549    5424 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0921 21:31:05.878511    5424 start.go:159] libmachine.API.Create for "addons-20220921213059-5916" (driver="docker")
	I0921 21:31:05.878571    5424 client.go:168] LocalClient.Create starting
	I0921 21:31:05.879550    5424 main.go:134] libmachine: Creating CA: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 21:31:06.111349    5424 main.go:134] libmachine: Creating client certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 21:31:06.247807    5424 cli_runner.go:164] Run: docker network inspect addons-20220921213059-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:31:06.443478    5424 cli_runner.go:211] docker network inspect addons-20220921213059-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:31:06.451964    5424 network_create.go:272] running [docker network inspect addons-20220921213059-5916] to gather additional debugging logs...
	I0921 21:31:06.451964    5424 cli_runner.go:164] Run: docker network inspect addons-20220921213059-5916
	W0921 21:31:06.631873    5424 cli_runner.go:211] docker network inspect addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:06.631947    5424 network_create.go:275] error running [docker network inspect addons-20220921213059-5916]: docker network inspect addons-20220921213059-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220921213059-5916
	I0921 21:31:06.631975    5424 network_create.go:277] output of [docker network inspect addons-20220921213059-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220921213059-5916
	
	** /stderr **
	I0921 21:31:06.639340    5424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 21:31:06.867289    5424 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00012a340] misses:0}
	I0921 21:31:06.867289    5424 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:31:06.867289    5424 network_create.go:115] attempt to create docker network addons-20220921213059-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 21:31:06.895799    5424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-20220921213059-5916 addons-20220921213059-5916
	W0921 21:31:07.130635    5424 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-20220921213059-5916 addons-20220921213059-5916 returned with exit code 1
	E0921 21:31:07.130635    5424 network_create.go:104] error while trying to create docker network addons-20220921213059-5916 192.168.49.0/24: create docker network addons-20220921213059-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-20220921213059-5916 addons-20220921213059-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	W0921 21:31:07.130635    5424 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220921213059-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-20220921213059-5916 addons-20220921213059-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220921213059-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-20220921213059-5916 addons-20220921213059-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
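
The "read-only file system" failure above is reported by the Docker daemon itself (its /var/lib/docker/network store cannot be written), so no retry or alternative subnet from minikube can succeed. A minimal manual check against the same daemon, using an arbitrary throwaway network name (the rm is only needed if the create unexpectedly succeeds):

    $ docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 ro-check
    $ docker info --format "storage driver: {{.Driver}}, data root: {{.DockerRootDir}}"
    $ docker network rm ro-check

If the create fails with the same daemon error, the data root shown by docker info is effectively read-only, and restarting Docker Desktop (or its backing VM/WSL2 distro) is typically required before minikube can proceed.
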
	
	I0921 21:31:07.143837    5424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 21:31:07.324476    5424 cli_runner.go:164] Run: docker volume create addons-20220921213059-5916 --label name.minikube.sigs.k8s.io=addons-20220921213059-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 21:31:07.539702    5424 cli_runner.go:211] docker volume create addons-20220921213059-5916 --label name.minikube.sigs.k8s.io=addons-20220921213059-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 21:31:07.539702    5424 client.go:171] LocalClient.Create took 1.6611229s
	I0921 21:31:09.556798    5424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:31:09.562818    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:09.748145    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:09.748620    5424 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
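
Each retry above runs the same port lookup: the Go template indexes the container's published-ports map for "22/tcp" and takes the HostPort of the first binding, which is how minikube locates the SSH endpoint of a kic node. Because the node container was never created (the volume create just above returned exit code 1), every lookup reports "No such container", and the df checks that need an SSH session fail in turn. On a host where the container does exist, the equivalent POSIX-shell invocation would be:

    $ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-20220921213059-5916
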
	I0921 21:31:10.037595    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:10.215009    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:10.215430    5424 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:10.771036    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:10.964824    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	W0921 21:31:10.965268    5424 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	
	W0921 21:31:10.965268    5424 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:10.974961    5424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:31:10.982062    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:11.180929    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:11.181235    5424 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:11.437190    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:11.646576    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:11.646807    5424 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:12.008169    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:12.222863    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:12.222863    5424 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:12.905737    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:13.103044    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	W0921 21:31:13.103200    5424 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	
	W0921 21:31:13.103200    5424 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:13.103200    5424 start.go:128] duration metric: createHost completed in 7.2410284s
	I0921 21:31:13.103200    5424 start.go:83] releasing machines lock for "addons-20220921213059-5916", held for 7.2424505s
	W0921 21:31:13.103200    5424 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for addons-20220921213059-5916 container: docker volume create addons-20220921213059-5916 --label name.minikube.sigs.k8s.io=addons-20220921213059-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220921213059-5916: error while creating volume root path '/var/lib/docker/volumes/addons-20220921213059-5916': mkdir /var/lib/docker/volumes/addons-20220921213059-5916: read-only file system
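
This is the same underlying condition as the earlier network failure: the daemon's /var/lib/docker tree is read-only, so the named volume for the node cannot be created either. A quick way to confirm it in isolation, with a throwaway volume name:

    $ docker volume create kic-writable-check
    $ docker volume rm kic-writable-check

If the create here also reports "read-only file system", the problem is entirely on the Docker Desktop side, and every subsequent minikube retry in this log can be expected to fail the same way.
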
	I0921 21:31:13.118627    5424 cli_runner.go:164] Run: docker container inspect addons-20220921213059-5916 --format={{.State.Status}}
	W0921 21:31:13.309261    5424 cli_runner.go:211] docker container inspect addons-20220921213059-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:31:13.309261    5424 delete.go:82] Unable to get host status for addons-20220921213059-5916, assuming it has already been deleted: state: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	W0921 21:31:13.309261    5424 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for addons-20220921213059-5916 container: docker volume create addons-20220921213059-5916 --label name.minikube.sigs.k8s.io=addons-20220921213059-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220921213059-5916: error while creating volume root path '/var/lib/docker/volumes/addons-20220921213059-5916': mkdir /var/lib/docker/volumes/addons-20220921213059-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for addons-20220921213059-5916 container: docker volume create addons-20220921213059-5916 --label name.minikube.sigs.k8s.io=addons-20220921213059-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220921213059-5916: error while creating volume root path '/var/lib/docker/volumes/addons-20220921213059-5916': mkdir /var/lib/docker/volumes/addons-20220921213059-5916: read-only file system
	
	I0921 21:31:13.309261    5424 start.go:617] Will try again in 5 seconds ...
	I0921 21:31:18.321868    5424 start.go:364] acquiring machines lock for addons-20220921213059-5916: {Name:mkac5adb40a8f54b09c7c8314ae3ae6a236be444 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:31:18.322094    5424 start.go:368] acquired machines lock for "addons-20220921213059-5916" in 225.8µs
	I0921 21:31:18.322431    5424 start.go:96] Skipping create...Using existing machine configuration
	I0921 21:31:18.322500    5424 fix.go:55] fixHost starting: 
	I0921 21:31:18.336660    5424 cli_runner.go:164] Run: docker container inspect addons-20220921213059-5916 --format={{.State.Status}}
	W0921 21:31:18.541335    5424 cli_runner.go:211] docker container inspect addons-20220921213059-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:31:18.541335    5424 fix.go:103] recreateIfNeeded on addons-20220921213059-5916: state= err=unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:18.541335    5424 fix.go:108] machineExists: false. err=machine does not exist
	I0921 21:31:18.558107    5424 out.go:177] * docker "addons-20220921213059-5916" container is missing, will recreate.
	I0921 21:31:18.567506    5424 delete.go:124] DEMOLISHING addons-20220921213059-5916 ...
	I0921 21:31:18.588799    5424 cli_runner.go:164] Run: docker container inspect addons-20220921213059-5916 --format={{.State.Status}}
	W0921 21:31:18.779009    5424 cli_runner.go:211] docker container inspect addons-20220921213059-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:31:18.779160    5424 stop.go:75] unable to get state: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:18.779160    5424 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:18.792252    5424 cli_runner.go:164] Run: docker container inspect addons-20220921213059-5916 --format={{.State.Status}}
	W0921 21:31:18.996475    5424 cli_runner.go:211] docker container inspect addons-20220921213059-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:31:18.996475    5424 delete.go:82] Unable to get host status for addons-20220921213059-5916, assuming it has already been deleted: state: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:19.004554    5424 cli_runner.go:164] Run: docker container inspect -f {{.Id}} addons-20220921213059-5916
	W0921 21:31:19.198283    5424 cli_runner.go:211] docker container inspect -f {{.Id}} addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:19.198429    5424 kic.go:356] could not find the container addons-20220921213059-5916 to remove it. will try anyways
	I0921 21:31:19.206426    5424 cli_runner.go:164] Run: docker container inspect addons-20220921213059-5916 --format={{.State.Status}}
	W0921 21:31:19.400541    5424 cli_runner.go:211] docker container inspect addons-20220921213059-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:31:19.400615    5424 oci.go:84] error getting container status, will try to delete anyways: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:19.412784    5424 cli_runner.go:164] Run: docker exec --privileged -t addons-20220921213059-5916 /bin/bash -c "sudo init 0"
	W0921 21:31:19.600885    5424 cli_runner.go:211] docker exec --privileged -t addons-20220921213059-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 21:31:19.600885    5424 oci.go:646] error shutdown addons-20220921213059-5916: docker exec --privileged -t addons-20220921213059-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:20.619256    5424 cli_runner.go:164] Run: docker container inspect addons-20220921213059-5916 --format={{.State.Status}}
	W0921 21:31:20.815804    5424 cli_runner.go:211] docker container inspect addons-20220921213059-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:31:20.815804    5424 oci.go:658] temporary error verifying shutdown: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:20.815804    5424 oci.go:660] temporary error: container addons-20220921213059-5916 status is  but expect it to be exited
	I0921 21:31:20.815804    5424 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:21.162635    5424 cli_runner.go:164] Run: docker container inspect addons-20220921213059-5916 --format={{.State.Status}}
	W0921 21:31:21.357610    5424 cli_runner.go:211] docker container inspect addons-20220921213059-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:31:21.357746    5424 oci.go:658] temporary error verifying shutdown: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:21.357746    5424 oci.go:660] temporary error: container addons-20220921213059-5916 status is  but expect it to be exited
	I0921 21:31:21.357746    5424 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:21.827915    5424 cli_runner.go:164] Run: docker container inspect addons-20220921213059-5916 --format={{.State.Status}}
	W0921 21:31:22.007054    5424 cli_runner.go:211] docker container inspect addons-20220921213059-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:31:22.007412    5424 oci.go:658] temporary error verifying shutdown: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:22.007467    5424 oci.go:660] temporary error: container addons-20220921213059-5916 status is  but expect it to be exited
	I0921 21:31:22.007548    5424 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:22.919033    5424 cli_runner.go:164] Run: docker container inspect addons-20220921213059-5916 --format={{.State.Status}}
	W0921 21:31:23.128338    5424 cli_runner.go:211] docker container inspect addons-20220921213059-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:31:23.128488    5424 oci.go:658] temporary error verifying shutdown: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:23.128488    5424 oci.go:660] temporary error: container addons-20220921213059-5916 status is  but expect it to be exited
	I0921 21:31:23.128567    5424 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:24.853534    5424 cli_runner.go:164] Run: docker container inspect addons-20220921213059-5916 --format={{.State.Status}}
	W0921 21:31:25.046380    5424 cli_runner.go:211] docker container inspect addons-20220921213059-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:31:25.046380    5424 oci.go:658] temporary error verifying shutdown: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:25.046674    5424 oci.go:660] temporary error: container addons-20220921213059-5916 status is  but expect it to be exited
	I0921 21:31:25.046754    5424 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:28.382314    5424 cli_runner.go:164] Run: docker container inspect addons-20220921213059-5916 --format={{.State.Status}}
	W0921 21:31:28.575624    5424 cli_runner.go:211] docker container inspect addons-20220921213059-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:31:28.575975    5424 oci.go:658] temporary error verifying shutdown: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:28.576048    5424 oci.go:660] temporary error: container addons-20220921213059-5916 status is  but expect it to be exited
	I0921 21:31:28.576133    5424 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:31.312179    5424 cli_runner.go:164] Run: docker container inspect addons-20220921213059-5916 --format={{.State.Status}}
	W0921 21:31:31.489400    5424 cli_runner.go:211] docker container inspect addons-20220921213059-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:31:31.489400    5424 oci.go:658] temporary error verifying shutdown: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:31.489400    5424 oci.go:660] temporary error: container addons-20220921213059-5916 status is  but expect it to be exited
	I0921 21:31:31.489400    5424 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:36.520162    5424 cli_runner.go:164] Run: docker container inspect addons-20220921213059-5916 --format={{.State.Status}}
	W0921 21:31:36.700769    5424 cli_runner.go:211] docker container inspect addons-20220921213059-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:31:36.700769    5424 oci.go:658] temporary error verifying shutdown: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:36.700769    5424 oci.go:660] temporary error: container addons-20220921213059-5916 status is  but expect it to be exited
	I0921 21:31:36.700769    5424 oci.go:88] couldn't shut down addons-20220921213059-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "addons-20220921213059-5916": docker container inspect addons-20220921213059-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	 
	I0921 21:31:36.708166    5424 cli_runner.go:164] Run: docker rm -f -v addons-20220921213059-5916
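
For context on the cleanup step above: docker rm -f -v force-removes the container (if it exists) together with its anonymous volumes; the named volume and the dedicated network that minikube creates for a profile are separate objects. A rough manual equivalent for this profile, where each command simply errors if the object was never created, would be:

    $ docker rm -f -v addons-20220921213059-5916
    $ docker volume rm addons-20220921213059-5916
    $ docker network rm addons-20220921213059-5916

In practice "minikube delete -p addons-20220921213059-5916", which the tool itself suggests further down, is the supported way to do this cleanup and also removes the local profile configuration.
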
	I0921 21:31:36.909236    5424 cli_runner.go:164] Run: docker container inspect -f {{.Id}} addons-20220921213059-5916
	W0921 21:31:37.091278    5424 cli_runner.go:211] docker container inspect -f {{.Id}} addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:37.099233    5424 cli_runner.go:164] Run: docker network inspect addons-20220921213059-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:31:37.283236    5424 cli_runner.go:211] docker network inspect addons-20220921213059-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:31:37.290488    5424 network_create.go:272] running [docker network inspect addons-20220921213059-5916] to gather additional debugging logs...
	I0921 21:31:37.290488    5424 cli_runner.go:164] Run: docker network inspect addons-20220921213059-5916
	W0921 21:31:37.469051    5424 cli_runner.go:211] docker network inspect addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:37.469105    5424 network_create.go:275] error running [docker network inspect addons-20220921213059-5916]: docker network inspect addons-20220921213059-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220921213059-5916
	I0921 21:31:37.469149    5424 network_create.go:277] output of [docker network inspect addons-20220921213059-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220921213059-5916
	
	** /stderr **
	W0921 21:31:37.470347    5424 delete.go:139] delete failed (probably ok) <nil>
	I0921 21:31:37.470347    5424 fix.go:115] Sleeping 1 second for extra luck!
	I0921 21:31:38.471891    5424 start.go:125] createHost starting for "" (driver="docker")
	I0921 21:31:38.475971    5424 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0921 21:31:38.476751    5424 start.go:159] libmachine.API.Create for "addons-20220921213059-5916" (driver="docker")
	I0921 21:31:38.476751    5424 client.go:168] LocalClient.Create starting
	I0921 21:31:38.477370    5424 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 21:31:38.477628    5424 main.go:134] libmachine: Decoding PEM data...
	I0921 21:31:38.477628    5424 main.go:134] libmachine: Parsing certificate...
	I0921 21:31:38.477628    5424 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 21:31:38.477628    5424 main.go:134] libmachine: Decoding PEM data...
	I0921 21:31:38.477628    5424 main.go:134] libmachine: Parsing certificate...
	I0921 21:31:38.486167    5424 cli_runner.go:164] Run: docker network inspect addons-20220921213059-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:31:38.673524    5424 cli_runner.go:211] docker network inspect addons-20220921213059-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:31:38.681881    5424 network_create.go:272] running [docker network inspect addons-20220921213059-5916] to gather additional debugging logs...
	I0921 21:31:38.681881    5424 cli_runner.go:164] Run: docker network inspect addons-20220921213059-5916
	W0921 21:31:38.882764    5424 cli_runner.go:211] docker network inspect addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:38.882764    5424 network_create.go:275] error running [docker network inspect addons-20220921213059-5916]: docker network inspect addons-20220921213059-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220921213059-5916
	I0921 21:31:38.882992    5424 network_create.go:277] output of [docker network inspect addons-20220921213059-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220921213059-5916
	
	** /stderr **
	I0921 21:31:38.892777    5424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 21:31:39.132241    5424 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00012a340] amended:false}} dirty:map[] misses:0}
	I0921 21:31:39.132304    5424 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:31:39.148842    5424 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00012a340] amended:true}} dirty:map[192.168.49.0:0xc00012a340 192.168.58.0:0xc00062e2d0] misses:0}
	I0921 21:31:39.148842    5424 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
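
The subnet bookkeeping above is internal to minikube: 192.168.49.0/24 is skipped only because this same process reserved it a few seconds earlier, not because a Docker network actually exists (the earlier create failed). The subnets Docker itself has assigned can be listed directly, using a simplified version of the template the log runs against the bridge network:

    $ docker network ls --format "{{.Name}}"
    $ docker network inspect bridge --format "{{range .IPAM.Config}}{{.Subnet}} {{end}}"

On a healthy daemon this is what keeps a new profile from colliding with subnets already used by existing Docker networks.
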
	I0921 21:31:39.149325    5424 network_create.go:115] attempt to create docker network addons-20220921213059-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 21:31:39.157810    5424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-20220921213059-5916 addons-20220921213059-5916
	W0921 21:31:39.428092    5424 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-20220921213059-5916 addons-20220921213059-5916 returned with exit code 1
	E0921 21:31:39.428180    5424 network_create.go:104] error while trying to create docker network addons-20220921213059-5916 192.168.58.0/24: create docker network addons-20220921213059-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-20220921213059-5916 addons-20220921213059-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	W0921 21:31:39.428633    5424 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220921213059-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-20220921213059-5916 addons-20220921213059-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network addons-20220921213059-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-20220921213059-5916 addons-20220921213059-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: failed to update bridge store for object type *bridge.networkConfiguration: open /var/lib/docker/network/files/local-kv.db: read-only file system
	
	I0921 21:31:39.443657    5424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 21:31:39.637842    5424 cli_runner.go:164] Run: docker volume create addons-20220921213059-5916 --label name.minikube.sigs.k8s.io=addons-20220921213059-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 21:31:39.819072    5424 cli_runner.go:211] docker volume create addons-20220921213059-5916 --label name.minikube.sigs.k8s.io=addons-20220921213059-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 21:31:39.819072    5424 client.go:171] LocalClient.Create took 1.3423149s
	I0921 21:31:41.843204    5424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:31:41.849307    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:42.032087    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:42.032087    5424 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:42.285550    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:42.486962    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:42.487077    5424 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:42.805328    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:43.028157    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:43.028157    5424 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:43.496602    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:43.692585    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	W0921 21:31:43.692585    5424 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	
	W0921 21:31:43.692585    5424 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:43.703075    5424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:31:43.710317    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:43.892436    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:43.892436    5424 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:44.090672    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:44.271053    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:44.271053    5424 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:44.547729    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:44.745015    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:44.745015    5424 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:45.247603    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:45.440463    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	W0921 21:31:45.440745    5424 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	
	W0921 21:31:45.440745    5424 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:45.440745    5424 start.go:128] duration metric: createHost completed in 6.9686164s
	I0921 21:31:45.451433    5424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:31:45.458244    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:45.640721    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:45.640768    5424 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:45.998662    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:46.192922    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:46.192978    5424 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:46.504696    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:46.696917    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:46.697445    5424 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:47.164739    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:47.358846    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	W0921 21:31:47.358846    5424 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	
	W0921 21:31:47.358846    5424 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:47.371219    5424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:31:47.378918    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:47.582891    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:47.582891    5424 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:47.773680    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:47.967068    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:47.967222    5424 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:48.490513    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:48.670480    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	I0921 21:31:48.670480    5424 retry.go:31] will retry after 673.154531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:49.358019    5424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916
	W0921 21:31:49.550042    5424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916 returned with exit code 1
	W0921 21:31:49.550042    5424 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	
	W0921 21:31:49.550042    5424 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20220921213059-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220921213059-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20220921213059-5916
	I0921 21:31:49.550042    5424 fix.go:57] fixHost completed within 31.2273929s
	I0921 21:31:49.550042    5424 start.go:83] releasing machines lock for "addons-20220921213059-5916", held for 31.2277987s
	W0921 21:31:49.551350    5424 out.go:239] * Failed to start docker container. Running "minikube delete -p addons-20220921213059-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220921213059-5916 container: docker volume create addons-20220921213059-5916 --label name.minikube.sigs.k8s.io=addons-20220921213059-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220921213059-5916: error while creating volume root path '/var/lib/docker/volumes/addons-20220921213059-5916': mkdir /var/lib/docker/volumes/addons-20220921213059-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p addons-20220921213059-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220921213059-5916 container: docker volume create addons-20220921213059-5916 --label name.minikube.sigs.k8s.io=addons-20220921213059-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220921213059-5916: error while creating volume root path '/var/lib/docker/volumes/addons-20220921213059-5916': mkdir /var/lib/docker/volumes/addons-20220921213059-5916: read-only file system
	
	I0921 21:31:49.556348    5424 out.go:177] 
	W0921 21:31:49.558790    5424 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220921213059-5916 container: docker volume create addons-20220921213059-5916 --label name.minikube.sigs.k8s.io=addons-20220921213059-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220921213059-5916: error while creating volume root path '/var/lib/docker/volumes/addons-20220921213059-5916': mkdir /var/lib/docker/volumes/addons-20220921213059-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for addons-20220921213059-5916 container: docker volume create addons-20220921213059-5916 --label name.minikube.sigs.k8s.io=addons-20220921213059-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create addons-20220921213059-5916: error while creating volume root path '/var/lib/docker/volumes/addons-20220921213059-5916': mkdir /var/lib/docker/volumes/addons-20220921213059-5916: read-only file system
	
	W0921 21:31:49.558790    5424 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 21:31:49.558790    5424 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 21:31:49.562453    5424 out.go:177] 

                                                
                                                
** /stderr **
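The repeated "retry.go:31] will retry after ..." entries in the stderr block above are a retry-with-backoff loop around the failing "docker container inspect" port lookup. A minimal sketch of that pattern, assuming nothing about minikube's actual retry package (the function name, delays, and jitter below are illustrative only):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls fn until it succeeds or maxTime elapses, sleeping a
	// growing, jittered delay between attempts -- the same shape as the
	// "will retry after 297.417842ms" / "448.358942ms" entries above.
	func retryWithBackoff(fn func() error, maxTime time.Duration) error {
		deadline := time.Now().Add(maxTime)
		delay := 200 * time.Millisecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s: %w", maxTime, err)
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay))) // jitter
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			delay = delay * 3 / 2 // grow the base delay
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 4 {
				return errors.New("No such container: addons-20220921213059-5916")
			}
			return nil
		}, 10*time.Second)
		fmt.Println("attempts:", attempts, "err:", err)
	}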
addons_test.go:78: out/minikube-windows-amd64.exe start -p addons-20220921213059-5916 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: exit status 60
--- FAIL: TestAddons/Setup (49.97s)
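The root cause here is not the addons themselves: every "docker volume create" in this run fails because /var/lib/docker/volumes inside the Docker Desktop VM is read-only (PR_DOCKER_READONLY_VOL). The condition can be confirmed independently of minikube with a throwaway volume; a small Go sketch of that probe (the probe volume name is arbitrary):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// probeVolumeCreate asks the Docker daemon to create, then remove, a scratch
	// volume. On the broken daemon above this fails with the same
	// "read-only file system" error quoted in the stderr block.
	func probeVolumeCreate(name string) error {
		out, err := exec.Command("docker", "volume", "create", name).CombinedOutput()
		if err != nil {
			return fmt.Errorf("volume create failed: %v: %s", err, strings.TrimSpace(string(out)))
		}
		_ = exec.Command("docker", "volume", "rm", name).Run() // best-effort cleanup
		return nil
	}

	func main() {
		if err := probeVolumeCreate("readonly-probe"); err != nil {
			fmt.Println("daemon volume root looks read-only:", err)
			fmt.Println("per the log suggestion: restart Docker (minikube issue #6825)")
			return
		}
		fmt.Println("docker volume create works; the read-only condition has cleared")
	}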

                                                
                                    
TestCertOptions (54.73s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20220921220839-5916 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-options-20220921220839-5916 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: exit status 60 (49.8662645s)

                                                
                                                
-- stdout --
	* [cert-options-20220921220839-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node cert-options-20220921220839-5916 in cluster cert-options-20220921220839-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-options-20220921220839-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase: (pull progress output elided)
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 22:08:46.626356    8656 network_create.go:104] error while trying to create docker network cert-options-20220921220839-5916 192.168.49.0/24: create docker network cert-options-20220921220839-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-options-20220921220839-5916 cert-options-20220921220839-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c1e4d1afe179b396a79e736eff9599d1f87e929cedb0f6f7734b8b9e21efee4b (br-c1e4d1afe179): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-options-20220921220839-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-options-20220921220839-5916 cert-options-20220921220839-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c1e4d1afe179b396a79e736eff9599d1f87e929cedb0f6f7734b8b9e21efee4b (br-c1e4d1afe179): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cert-options-20220921220839-5916 container: docker volume create cert-options-20220921220839-5916 --label name.minikube.sigs.k8s.io=cert-options-20220921220839-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-options-20220921220839-5916: error while creating volume root path '/var/lib/docker/volumes/cert-options-20220921220839-5916': mkdir /var/lib/docker/volumes/cert-options-20220921220839-5916: read-only file system
	
	E0921 22:09:18.981659    8656 network_create.go:104] error while trying to create docker network cert-options-20220921220839-5916 192.168.58.0/24: create docker network cert-options-20220921220839-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-options-20220921220839-5916 cert-options-20220921220839-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cec59b1670e9a9a59df25183fce06c19762d54e74aa0b457a49969ba414b1997 (br-cec59b1670e9): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-options-20220921220839-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-options-20220921220839-5916 cert-options-20220921220839-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cec59b1670e9a9a59df25183fce06c19762d54e74aa0b457a49969ba414b1997 (br-cec59b1670e9): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-options-20220921220839-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-options-20220921220839-5916 container: docker volume create cert-options-20220921220839-5916 --label name.minikube.sigs.k8s.io=cert-options-20220921220839-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-options-20220921220839-5916: error while creating volume root path '/var/lib/docker/volumes/cert-options-20220921220839-5916': mkdir /var/lib/docker/volumes/cert-options-20220921220839-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-options-20220921220839-5916 container: docker volume create cert-options-20220921220839-5916 --label name.minikube.sigs.k8s.io=cert-options-20220921220839-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-options-20220921220839-5916: error while creating volume root path '/var/lib/docker/volumes/cert-options-20220921220839-5916': mkdir /var/lib/docker/volumes/cert-options-20220921220839-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p cert-options-20220921220839-5916 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost" : exit status 60
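Besides the read-only volume root, this start also hit "networks have overlapping IPv4" for both 192.168.49.0/24 and 192.168.58.0/24, so minikube could not create its dedicated bridge. A quick way to see which existing Docker networks already occupy those ranges is to walk "docker network ls" and print each network's IPAM subnets; a sketch using plain docker CLI calls (no minikube code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// one network name per line
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			panic(err)
		}
		for _, name := range strings.Fields(string(out)) {
			// print each network's subnets; entries overlapping 192.168.49.0/24 or
			// 192.168.58.0/24 explain the conflicts reported above
			subnets, err := exec.Command("docker", "network", "inspect",
				"-f", "{{range .IPAM.Config}}{{.Subnet}} {{end}}", name).Output()
			if err != nil {
				continue
			}
			fmt.Printf("%-30s %s\n", name, strings.TrimSpace(string(subnets)))
		}
	}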
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20220921220839-5916 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p cert-options-20220921220839-5916 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 80 (1.0923663s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20220921220839-5916": docker container inspect cert-options-20220921220839-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220921220839-5916
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-windows-amd64.exe -p cert-options-20220921220839-5916 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 80
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:82: failed to inspect container for the port get port 8555 for "cert-options-20220921220839-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-20220921220839-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: cert-options-20220921220839-5916
cert_options_test.go:85: expected to get a non-zero forwarded port but got 0
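The zero forwarded port is a direct consequence of the container never having been created; the lookup the test performs reads NetworkSettings.Ports["8555/tcp"][0].HostPort from docker inspect. The same lookup can be done by decoding the inspect JSON rather than a Go template; a sketch, reusing the profile name from this test:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// hostPort returns the host port Docker published for the given container
	// port (e.g. "8555/tcp"); it fails the same way the test does when the
	// container or the binding does not exist.
	func hostPort(container, containerPort string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		var info []struct {
			NetworkSettings struct {
				Ports map[string][]struct {
					HostIp   string
					HostPort string
				}
			}
		}
		if err := json.Unmarshal(out, &info); err != nil {
			return "", err
		}
		if len(info) == 0 || len(info[0].NetworkSettings.Ports[containerPort]) == 0 {
			return "", fmt.Errorf("no published binding for %s", containerPort)
		}
		return info[0].NetworkSettings.Ports[containerPort][0].HostPort, nil
	}

	func main() {
		port, err := hostPort("cert-options-20220921220839-5916", "8555/tcp")
		fmt.Println(port, err)
	}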
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-20220921220839-5916 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p cert-options-20220921220839-5916 -- "sudo cat /etc/kubernetes/admin.conf": exit status 80 (1.0612786s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20220921220839-5916": docker container inspect cert-options-20220921220839-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220921220839-5916
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-windows-amd64.exe ssh -p cert-options-20220921220839-5916 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 80
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20220921220839-5916": docker container inspect cert-options-20220921220839-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220921220839-5916
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2022-09-21 22:09:31.7661498 +0000 GMT m=+2379.315969301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-options-20220921220839-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect cert-options-20220921220839-5916: exit status 1 (240.0044ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: cert-options-20220921220839-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20220921220839-5916 -n cert-options-20220921220839-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20220921220839-5916 -n cert-options-20220921220839-5916: exit status 7 (572.7421ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:09:32.557189    9108 status.go:247] status error: host: state: unknown state "cert-options-20220921220839-5916": docker container inspect cert-options-20220921220839-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20220921220839-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-20220921220839-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-options-20220921220839-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20220921220839-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20220921220839-5916: (1.6831525s)
--- FAIL: TestCertOptions (54.73s)
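The SAN assertions at cert_options_test.go:69 never had a certificate to check. For reference, listing the IP and DNS SANs the test expects (127.0.0.1, 192.168.15.15, localhost, www.google.com) only needs crypto/x509; the sketch below assumes apiserver.crt has been copied out of the node to the working directory (inside the node it lives at /var/lib/minikube/certs/apiserver.crt):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// assumes the apiserver certificate was copied to the working directory
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("apiserver.crt is not PEM-encoded")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// these are the fields the SAN assertions above look at
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
	}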

                                                
                                    
TestCertExpiration (308.84s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220921220719-5916 --memory=2048 --cert-expiration=3m --driver=docker

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-20220921220719-5916 --memory=2048 --cert-expiration=3m --driver=docker: exit status 60 (50.0811251s)

                                                
                                                
-- stdout --
	* [cert-expiration-20220921220719-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node cert-expiration-20220921220719-5916 in cluster cert-expiration-20220921220719-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20220921220719-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase: (pull progress output elided)
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 22:07:26.185617    6384 network_create.go:104] error while trying to create docker network cert-expiration-20220921220719-5916 192.168.49.0/24: create docker network cert-expiration-20220921220719-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 cert-expiration-20220921220719-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a7d1c6380d4c9286e816236531e58a71bde517b3600f6f4bce2da9e15bfd1ad2 (br-a7d1c6380d4c): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220921220719-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 cert-expiration-20220921220719-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network a7d1c6380d4c9286e816236531e58a71bde517b3600f6f4bce2da9e15bfd1ad2 (br-a7d1c6380d4c): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220921220719-5916 container: docker volume create cert-expiration-20220921220719-5916 --label name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220921220719-5916: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220921220719-5916': mkdir /var/lib/docker/volumes/cert-expiration-20220921220719-5916: read-only file system
	
	E0921 22:07:58.858119    6384 network_create.go:104] error while trying to create docker network cert-expiration-20220921220719-5916 192.168.58.0/24: create docker network cert-expiration-20220921220719-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 cert-expiration-20220921220719-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 134945fbfaa5ba91289533ad127b2cff3e1a7ce776db3d5d0f62e16f6e457963 (br-134945fbfaa5): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220921220719-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 cert-expiration-20220921220719-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 134945fbfaa5ba91289533ad127b2cff3e1a7ce776db3d5d0f62e16f6e457963 (br-134945fbfaa5): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20220921220719-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220921220719-5916 container: docker volume create cert-expiration-20220921220719-5916 --label name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220921220719-5916: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220921220719-5916': mkdir /var/lib/docker/volumes/cert-expiration-20220921220719-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220921220719-5916 container: docker volume create cert-expiration-20220921220719-5916 --label name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220921220719-5916: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220921220719-5916': mkdir /var/lib/docker/volumes/cert-expiration-20220921220719-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p cert-expiration-20220921220719-5916 --memory=2048 --cert-expiration=3m --driver=docker" : exit status 60

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220921220719-5916 --memory=2048 --cert-expiration=8760h --driver=docker

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-20220921220719-5916 --memory=2048 --cert-expiration=8760h --driver=docker: exit status 60 (1m16.1258438s)

                                                
                                                
-- stdout --
	* [cert-expiration-20220921220719-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20220921220719-5916 in cluster cert-expiration-20220921220719-5916
	* Pulling base image ...
	* docker "cert-expiration-20220921220719-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20220921220719-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase: (pull progress output elided)
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 22:11:36.629747    8760 network_create.go:104] error while trying to create docker network cert-expiration-20220921220719-5916 192.168.49.0/24: create docker network cert-expiration-20220921220719-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 cert-expiration-20220921220719-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4f49ddbae118aed0ab0ee7797975943ddcef0b07a6c9426d9a94f587511988d9 (br-4f49ddbae118): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220921220719-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 cert-expiration-20220921220719-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4f49ddbae118aed0ab0ee7797975943ddcef0b07a6c9426d9a94f587511988d9 (br-4f49ddbae118): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220921220719-5916 container: docker volume create cert-expiration-20220921220719-5916 --label name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220921220719-5916: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220921220719-5916': mkdir /var/lib/docker/volumes/cert-expiration-20220921220719-5916: read-only file system
	
	E0921 22:12:15.981417    8760 network_create.go:104] error while trying to create docker network cert-expiration-20220921220719-5916 192.168.58.0/24: create docker network cert-expiration-20220921220719-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 cert-expiration-20220921220719-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ac1ebde7c638c55acd78ef5fa30a3d259ac78c38b156abe8b304c36db7972fc5 (br-ac1ebde7c638): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220921220719-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 cert-expiration-20220921220719-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ac1ebde7c638c55acd78ef5fa30a3d259ac78c38b156abe8b304c36db7972fc5 (br-ac1ebde7c638): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20220921220719-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220921220719-5916 container: docker volume create cert-expiration-20220921220719-5916 --label name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220921220719-5916: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220921220719-5916': mkdir /var/lib/docker/volumes/cert-expiration-20220921220719-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220921220719-5916 container: docker volume create cert-expiration-20220921220719-5916 --label name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220921220719-5916: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220921220719-5916': mkdir /var/lib/docker/volumes/cert-expiration-20220921220719-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-windows-amd64.exe start -p cert-expiration-20220921220719-5916 --memory=2048 --cert-expiration=8760h --driver=docker" : exit status 60

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-20220921220719-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20220921220719-5916 in cluster cert-expiration-20220921220719-5916
	* Pulling base image ...
	* docker "cert-expiration-20220921220719-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20220921220719-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase: (pull progress output elided)
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 22:11:36.629747    8760 network_create.go:104] error while trying to create docker network cert-expiration-20220921220719-5916 192.168.49.0/24: create docker network cert-expiration-20220921220719-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 cert-expiration-20220921220719-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4f49ddbae118aed0ab0ee7797975943ddcef0b07a6c9426d9a94f587511988d9 (br-4f49ddbae118): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220921220719-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 cert-expiration-20220921220719-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4f49ddbae118aed0ab0ee7797975943ddcef0b07a6c9426d9a94f587511988d9 (br-4f49ddbae118): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220921220719-5916 container: docker volume create cert-expiration-20220921220719-5916 --label name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220921220719-5916: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220921220719-5916': mkdir /var/lib/docker/volumes/cert-expiration-20220921220719-5916: read-only file system
	
	E0921 22:12:15.981417    8760 network_create.go:104] error while trying to create docker network cert-expiration-20220921220719-5916 192.168.58.0/24: create docker network cert-expiration-20220921220719-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 cert-expiration-20220921220719-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ac1ebde7c638c55acd78ef5fa30a3d259ac78c38b156abe8b304c36db7972fc5 (br-ac1ebde7c638): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cert-expiration-20220921220719-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 cert-expiration-20220921220719-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ac1ebde7c638c55acd78ef5fa30a3d259ac78c38b156abe8b304c36db7972fc5 (br-ac1ebde7c638): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20220921220719-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220921220719-5916 container: docker volume create cert-expiration-20220921220719-5916 --label name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220921220719-5916: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220921220719-5916': mkdir /var/lib/docker/volumes/cert-expiration-20220921220719-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cert-expiration-20220921220719-5916 container: docker volume create cert-expiration-20220921220719-5916 --label name.minikube.sigs.k8s.io=cert-expiration-20220921220719-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cert-expiration-20220921220719-5916: error while creating volume root path '/var/lib/docker/volumes/cert-expiration-20220921220719-5916': mkdir /var/lib/docker/volumes/cert-expiration-20220921220719-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2022-09-21 22:12:25.4222256 +0000 GMT m=+2552.970700301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertExpiration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-expiration-20220921220719-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect cert-expiration-20220921220719-5916: exit status 1 (267.8254ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: cert-expiration-20220921220719-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-20220921220719-5916 -n cert-expiration-20220921220719-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-20220921220719-5916 -n cert-expiration-20220921220719-5916: exit status 7 (611.6987ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:26.281323    8124 status.go:247] status error: host: state: unknown state "cert-expiration-20220921220719-5916": docker container inspect cert-expiration-20220921220719-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-expiration-20220921220719-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-20220921220719-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-expiration-20220921220719-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-20220921220719-5916

                                                
                                                
=== CONT  TestCertExpiration
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-20220921220719-5916: (1.7328483s)
--- FAIL: TestCertExpiration (308.84s)
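The post-mortems for both cert tests report exit status 7 with state "Nonexistent" because the status probe ("docker container inspect <name> --format={{.State.Status}}") finds no container at all, so the expired-cert warning this test looks for could never be produced. A sketch of that probe follows; mapping the "No such container" error to "Nonexistent" mirrors the report output above but is an assumption about the status helper's exact behavior:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState asks Docker for a container's .State.Status and maps the
	// "No such container" error to the Nonexistent state shown in the report.
	func containerState(name string) string {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "No such container") {
				return "Nonexistent"
			}
			return "Unknown"
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		fmt.Println(containerState("cert-expiration-20220921220719-5916"))
	}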

                                                
                                    
TestDockerFlags (54.06s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20220921220745-5916 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p docker-flags-20220921220745-5916 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: exit status 60 (48.9431945s)

                                                
                                                
-- stdout --
	* [docker-flags-20220921220745-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node docker-flags-20220921220745-5916 in cluster docker-flags-20220921220745-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-20220921220745-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:07:45.702059    6788 out.go:296] Setting OutFile to fd 1000 ...
	I0921 22:07:45.758442    6788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:07:45.758442    6788 out.go:309] Setting ErrFile to fd 1724...
	I0921 22:07:45.758442    6788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:07:45.780283    6788 out.go:303] Setting JSON to false
	I0921 22:07:45.784250    6788 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4134,"bootTime":1663793931,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:07:45.784313    6788 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:07:45.790282    6788 out.go:177] * [docker-flags-20220921220745-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:07:45.794441    6788 notify.go:214] Checking for updates...
	I0921 22:07:45.797719    6788 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:07:45.800630    6788 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:07:45.803289    6788 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:07:45.805650    6788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:07:45.809023    6788 config.go:180] Loaded profile config "NoKubernetes-20220921220434-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0921 22:07:45.809023    6788 config.go:180] Loaded profile config "cert-expiration-20220921220719-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:07:45.809023    6788 config.go:180] Loaded profile config "missing-upgrade-20220921220627-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0921 22:07:45.810186    6788 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:07:45.810186    6788 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:07:46.093268    6788 docker.go:137] docker version: linux-20.10.17
	I0921 22:07:46.100815    6788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:07:46.621999    6788 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:74 SystemTime:2022-09-21 22:07:46.2464997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:07:46.626524    6788 out.go:177] * Using the docker driver based on user configuration
	I0921 22:07:46.626524    6788 start.go:284] selected driver: docker
	I0921 22:07:46.626524    6788 start.go:808] validating driver "docker" against <nil>
	I0921 22:07:46.626524    6788 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:07:46.691822    6788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:07:47.245473    6788 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:74 SystemTime:2022-09-21 22:07:46.8560801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:07:47.245473    6788 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:07:47.246493    6788 start_flags.go:862] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0921 22:07:47.251479    6788 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:07:47.252476    6788 cni.go:95] Creating CNI manager for ""
	I0921 22:07:47.252476    6788 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:07:47.252476    6788 start_flags.go:316] config:
	{Name:docker-flags-20220921220745-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:docker-flags-20220921220745-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDom
ain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath
:/var/run/socket_vmnet}
	I0921 22:07:47.256478    6788 out.go:177] * Starting control plane node docker-flags-20220921220745-5916 in cluster docker-flags-20220921220745-5916
	I0921 22:07:47.259474    6788 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:07:47.262474    6788 out.go:177] * Pulling base image ...
	I0921 22:07:47.265474    6788 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:07:47.265474    6788 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:07:47.265474    6788 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:07:47.265474    6788 cache.go:57] Caching tarball of preloaded images
	I0921 22:07:47.266479    6788 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:07:47.266479    6788 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:07:47.266479    6788 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\docker-flags-20220921220745-5916\config.json ...
	I0921 22:07:47.266479    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\docker-flags-20220921220745-5916\config.json: {Name:mk1390fc9704bd8f48846e1324516c9648648afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:07:47.450266    6788 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:07:47.450266    6788 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:07:47.450266    6788 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:07:47.450266    6788 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:07:47.450266    6788 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:07:47.450266    6788 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:07:47.450266    6788 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:07:47.450266    6788 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:07:47.451235    6788 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:07:49.711051    6788 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-3355544392: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-3355544392: read-only file system"}
	I0921 22:07:49.711051    6788 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:07:49.711051    6788 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:07:49.711051    6788 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:07:49.711051    6788 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:07:49.948602    6788 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:07:49.948602    6788 image.go:258] Getting image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 22:07:50.312384    6788 image.go:272] Writing image gcr.io/k8s-minikube/kicbase:v0.0.34
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 900ms
	I0921 22:07:51.179875    6788 image.go:306] Pulling image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 22:07:51.521808    6788 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:07:51.521808    6788 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:07:51.521808    6788 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:07:51.521808    6788 start.go:364] acquiring machines lock for docker-flags-20220921220745-5916: {Name:mkf5499f4870bfcd550f2699249fd53ab5cfe252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:07:51.521808    6788 start.go:368] acquired machines lock for "docker-flags-20220921220745-5916" in 0s
	I0921 22:07:51.522536    6788 start.go:93] Provisioning new machine with config: &{Name:docker-flags-20220921220745-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:docker-flags-20220921220745-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetric
s:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:07:51.522719    6788 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:07:51.527620    6788 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:07:51.527675    6788 start.go:159] libmachine.API.Create for "docker-flags-20220921220745-5916" (driver="docker")
	I0921 22:07:51.527675    6788 client.go:168] LocalClient.Create starting
	I0921 22:07:51.528388    6788 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:07:51.528388    6788 main.go:134] libmachine: Decoding PEM data...
	I0921 22:07:51.528388    6788 main.go:134] libmachine: Parsing certificate...
	I0921 22:07:51.528388    6788 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:07:51.529014    6788 main.go:134] libmachine: Decoding PEM data...
	I0921 22:07:51.529088    6788 main.go:134] libmachine: Parsing certificate...
	I0921 22:07:51.536916    6788 cli_runner.go:164] Run: docker network inspect docker-flags-20220921220745-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:07:51.724492    6788 cli_runner.go:211] docker network inspect docker-flags-20220921220745-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:07:51.732023    6788 network_create.go:272] running [docker network inspect docker-flags-20220921220745-5916] to gather additional debugging logs...
	I0921 22:07:51.732023    6788 cli_runner.go:164] Run: docker network inspect docker-flags-20220921220745-5916
	W0921 22:07:51.915856    6788 cli_runner.go:211] docker network inspect docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:07:51.915856    6788 network_create.go:275] error running [docker network inspect docker-flags-20220921220745-5916]: docker network inspect docker-flags-20220921220745-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220921220745-5916
	I0921 22:07:51.915856    6788 network_create.go:277] output of [docker network inspect docker-flags-20220921220745-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220921220745-5916
	
	** /stderr **
	I0921 22:07:51.922829    6788 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:07:52.163538    6788 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000588110] misses:0}
	I0921 22:07:52.164325    6788 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:07:52.164325    6788 network_create.go:115] attempt to create docker network docker-flags-20220921220745-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:07:52.172185    6788 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 docker-flags-20220921220745-5916
	W0921 22:07:52.356199    6788 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 docker-flags-20220921220745-5916 returned with exit code 1
	E0921 22:07:52.356199    6788 network_create.go:104] error while trying to create docker network docker-flags-20220921220745-5916 192.168.49.0/24: create docker network docker-flags-20220921220745-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6bef8d44862a4b258c6ca5b9d36da297317f90080e058260a7f86406ebd6a5f0 (br-6bef8d44862a): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:07:52.356199    6788 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220921220745-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6bef8d44862a4b258c6ca5b9d36da297317f90080e058260a7f86406ebd6a5f0 (br-6bef8d44862a): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220921220745-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6bef8d44862a4b258c6ca5b9d36da297317f90080e058260a7f86406ebd6a5f0 (br-6bef8d44862a): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:07:52.373297    6788 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:07:52.595872    6788 cli_runner.go:164] Run: docker volume create docker-flags-20220921220745-5916 --label name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:07:52.764631    6788 cli_runner.go:211] docker volume create docker-flags-20220921220745-5916 --label name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:07:52.764631    6788 client.go:171] LocalClient.Create took 1.2369463s
	I0921 22:07:54.788901    6788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:07:54.802452    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:07:55.012115    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:07:55.012445    6788 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:07:55.303337    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:07:55.505545    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:07:55.505545    6788 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:07:56.060274    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:07:56.254250    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	W0921 22:07:56.254513    6788 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	
	W0921 22:07:56.254513    6788 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:07:56.266774    6788 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:07:56.275601    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:07:56.487242    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:07:56.487242    6788 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:07:56.742305    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:07:56.924507    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:07:56.924507    6788 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:07:57.291726    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:07:57.485674    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:07:57.485674    6788 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:07:58.164086    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:07:58.374719    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	W0921 22:07:58.374719    6788 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	
	W0921 22:07:58.374719    6788 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:07:58.374719    6788 start.go:128] duration metric: createHost completed in 6.8519303s
	I0921 22:07:58.374719    6788 start.go:83] releasing machines lock for "docker-flags-20220921220745-5916", held for 6.8522971s
	W0921 22:07:58.374719    6788 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for docker-flags-20220921220745-5916 container: docker volume create docker-flags-20220921220745-5916 --label name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220921220745-5916: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220921220745-5916': mkdir /var/lib/docker/volumes/docker-flags-20220921220745-5916: read-only file system
	I0921 22:07:58.389719    6788 cli_runner.go:164] Run: docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}
	W0921 22:07:58.624045    6788 cli_runner.go:211] docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:07:58.624045    6788 delete.go:82] Unable to get host status for docker-flags-20220921220745-5916, assuming it has already been deleted: state: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	W0921 22:07:58.624045    6788 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for docker-flags-20220921220745-5916 container: docker volume create docker-flags-20220921220745-5916 --label name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220921220745-5916: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220921220745-5916': mkdir /var/lib/docker/volumes/docker-flags-20220921220745-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for docker-flags-20220921220745-5916 container: docker volume create docker-flags-20220921220745-5916 --label name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220921220745-5916: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220921220745-5916': mkdir /var/lib/docker/volumes/docker-flags-20220921220745-5916: read-only file system
	
	I0921 22:07:58.624045    6788 start.go:617] Will try again in 5 seconds ...
	I0921 22:08:03.638618    6788 start.go:364] acquiring machines lock for docker-flags-20220921220745-5916: {Name:mkf5499f4870bfcd550f2699249fd53ab5cfe252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:03.638937    6788 start.go:368] acquired machines lock for "docker-flags-20220921220745-5916" in 248µs
	I0921 22:08:03.639112    6788 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:08:03.639149    6788 fix.go:55] fixHost starting: 
	I0921 22:08:03.655743    6788 cli_runner.go:164] Run: docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}
	W0921 22:08:03.870272    6788 cli_runner.go:211] docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:03.870272    6788 fix.go:103] recreateIfNeeded on docker-flags-20220921220745-5916: state= err=unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:03.870272    6788 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:08:03.873263    6788 out.go:177] * docker "docker-flags-20220921220745-5916" container is missing, will recreate.
	I0921 22:08:03.875263    6788 delete.go:124] DEMOLISHING docker-flags-20220921220745-5916 ...
	I0921 22:08:03.888270    6788 cli_runner.go:164] Run: docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}
	W0921 22:08:04.123836    6788 cli_runner.go:211] docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:08:04.123836    6788 stop.go:75] unable to get state: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:04.123836    6788 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:04.139824    6788 cli_runner.go:164] Run: docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}
	W0921 22:08:04.345942    6788 cli_runner.go:211] docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:04.345942    6788 delete.go:82] Unable to get host status for docker-flags-20220921220745-5916, assuming it has already been deleted: state: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:04.352878    6788 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-20220921220745-5916
	W0921 22:08:04.537246    6788 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:04.537246    6788 kic.go:356] could not find the container docker-flags-20220921220745-5916 to remove it. will try anyways
	I0921 22:08:04.543239    6788 cli_runner.go:164] Run: docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}
	W0921 22:08:04.739374    6788 cli_runner.go:211] docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:08:04.739374    6788 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:04.746358    6788 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-20220921220745-5916 /bin/bash -c "sudo init 0"
	W0921 22:08:04.941248    6788 cli_runner.go:211] docker exec --privileged -t docker-flags-20220921220745-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:08:04.941248    6788 oci.go:646] error shutdown docker-flags-20220921220745-5916: docker exec --privileged -t docker-flags-20220921220745-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:05.950575    6788 cli_runner.go:164] Run: docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}
	W0921 22:08:06.144800    6788 cli_runner.go:211] docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:06.145009    6788 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:06.145079    6788 oci.go:660] temporary error: container docker-flags-20220921220745-5916 status is  but expect it to be exited
	I0921 22:08:06.145124    6788 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:06.494477    6788 cli_runner.go:164] Run: docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}
	W0921 22:08:06.688246    6788 cli_runner.go:211] docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:06.688517    6788 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:06.688553    6788 oci.go:660] temporary error: container docker-flags-20220921220745-5916 status is  but expect it to be exited
	I0921 22:08:06.688581    6788 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:07.147008    6788 cli_runner.go:164] Run: docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}
	W0921 22:08:07.372230    6788 cli_runner.go:211] docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:07.372230    6788 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:07.372230    6788 oci.go:660] temporary error: container docker-flags-20220921220745-5916 status is  but expect it to be exited
	I0921 22:08:07.372230    6788 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:08.287190    6788 cli_runner.go:164] Run: docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}
	W0921 22:08:08.494903    6788 cli_runner.go:211] docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:08.494903    6788 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:08.494903    6788 oci.go:660] temporary error: container docker-flags-20220921220745-5916 status is  but expect it to be exited
	I0921 22:08:08.494903    6788 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:10.227107    6788 cli_runner.go:164] Run: docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}
	W0921 22:08:10.421244    6788 cli_runner.go:211] docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:10.421244    6788 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:10.421244    6788 oci.go:660] temporary error: container docker-flags-20220921220745-5916 status is  but expect it to be exited
	I0921 22:08:10.421244    6788 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:13.756070    6788 cli_runner.go:164] Run: docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}
	W0921 22:08:13.964441    6788 cli_runner.go:211] docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:13.964530    6788 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:13.964585    6788 oci.go:660] temporary error: container docker-flags-20220921220745-5916 status is  but expect it to be exited
	I0921 22:08:13.964646    6788 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:16.689280    6788 cli_runner.go:164] Run: docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}
	W0921 22:08:16.898983    6788 cli_runner.go:211] docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:16.898983    6788 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:16.898983    6788 oci.go:660] temporary error: container docker-flags-20220921220745-5916 status is  but expect it to be exited
	I0921 22:08:16.898983    6788 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:21.929855    6788 cli_runner.go:164] Run: docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}
	W0921 22:08:22.121878    6788 cli_runner.go:211] docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:22.121878    6788 oci.go:658] temporary error verifying shutdown: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:22.121878    6788 oci.go:660] temporary error: container docker-flags-20220921220745-5916 status is  but expect it to be exited
	I0921 22:08:22.121878    6788 oci.go:88] couldn't shut down docker-flags-20220921220745-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	 
	I0921 22:08:22.128891    6788 cli_runner.go:164] Run: docker rm -f -v docker-flags-20220921220745-5916
	I0921 22:08:22.344340    6788 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-20220921220745-5916
	W0921 22:08:22.531267    6788 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:22.538944    6788 cli_runner.go:164] Run: docker network inspect docker-flags-20220921220745-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:08:22.750300    6788 cli_runner.go:211] docker network inspect docker-flags-20220921220745-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:08:22.756304    6788 network_create.go:272] running [docker network inspect docker-flags-20220921220745-5916] to gather additional debugging logs...
	I0921 22:08:22.756304    6788 cli_runner.go:164] Run: docker network inspect docker-flags-20220921220745-5916
	W0921 22:08:22.982544    6788 cli_runner.go:211] docker network inspect docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:22.982544    6788 network_create.go:275] error running [docker network inspect docker-flags-20220921220745-5916]: docker network inspect docker-flags-20220921220745-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220921220745-5916
	I0921 22:08:22.982544    6788 network_create.go:277] output of [docker network inspect docker-flags-20220921220745-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220921220745-5916
	
	** /stderr **
	W0921 22:08:22.983259    6788 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:08:22.983259    6788 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:08:23.991031    6788 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:08:23.995544    6788 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:08:23.995832    6788 start.go:159] libmachine.API.Create for "docker-flags-20220921220745-5916" (driver="docker")
	I0921 22:08:23.995832    6788 client.go:168] LocalClient.Create starting
	I0921 22:08:23.996505    6788 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:08:23.996808    6788 main.go:134] libmachine: Decoding PEM data...
	I0921 22:08:23.996883    6788 main.go:134] libmachine: Parsing certificate...
	I0921 22:08:23.996996    6788 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:08:23.996996    6788 main.go:134] libmachine: Decoding PEM data...
	I0921 22:08:23.996996    6788 main.go:134] libmachine: Parsing certificate...
	I0921 22:08:24.005584    6788 cli_runner.go:164] Run: docker network inspect docker-flags-20220921220745-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:08:24.195455    6788 cli_runner.go:211] docker network inspect docker-flags-20220921220745-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:08:24.201459    6788 network_create.go:272] running [docker network inspect docker-flags-20220921220745-5916] to gather additional debugging logs...
	I0921 22:08:24.201459    6788 cli_runner.go:164] Run: docker network inspect docker-flags-20220921220745-5916
	W0921 22:08:24.382493    6788 cli_runner.go:211] docker network inspect docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:24.382845    6788 network_create.go:275] error running [docker network inspect docker-flags-20220921220745-5916]: docker network inspect docker-flags-20220921220745-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20220921220745-5916
	I0921 22:08:24.382977    6788 network_create.go:277] output of [docker network inspect docker-flags-20220921220745-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20220921220745-5916
	
	** /stderr **
	I0921 22:08:24.408870    6788 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:08:24.618223    6788 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000588110] amended:false}} dirty:map[] misses:0}
	I0921 22:08:24.619233    6788 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:08:24.633921    6788 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000588110] amended:true}} dirty:map[192.168.49.0:0xc000588110 192.168.58.0:0xc000488640] misses:0}
	I0921 22:08:24.634452    6788 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:08:24.634452    6788 network_create.go:115] attempt to create docker network docker-flags-20220921220745-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:08:24.641954    6788 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 docker-flags-20220921220745-5916
	W0921 22:08:24.834411    6788 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 docker-flags-20220921220745-5916 returned with exit code 1
	E0921 22:08:24.834519    6788 network_create.go:104] error while trying to create docker network docker-flags-20220921220745-5916 192.168.58.0/24: create docker network docker-flags-20220921220745-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f6afe4743e12fb6bb16c87355c9ccc16447f89d55c04fe455f54ba1498069831 (br-f6afe4743e12): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:08:24.834519    6788 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220921220745-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f6afe4743e12fb6bb16c87355c9ccc16447f89d55c04fe455f54ba1498069831 (br-f6afe4743e12): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network docker-flags-20220921220745-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f6afe4743e12fb6bb16c87355c9ccc16447f89d55c04fe455f54ba1498069831 (br-f6afe4743e12): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:08:24.848705    6788 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:08:25.062613    6788 cli_runner.go:164] Run: docker volume create docker-flags-20220921220745-5916 --label name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:08:25.273360    6788 cli_runner.go:211] docker volume create docker-flags-20220921220745-5916 --label name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:08:25.273360    6788 client.go:171] LocalClient.Create took 1.2775184s
	I0921 22:08:27.294559    6788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:08:27.302094    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:27.498051    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:27.498218    6788 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:27.754716    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:27.950111    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:27.950111    6788 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:28.260221    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:28.469782    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:28.469782    6788 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:28.930335    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:29.150973    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	W0921 22:08:29.151104    6788 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	
	W0921 22:08:29.151104    6788 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:29.166068    6788 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:08:29.172471    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:29.382587    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:29.382587    6788 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:29.572848    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:29.797185    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:29.797185    6788 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:30.070105    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:30.326591    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:30.326591    6788 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:30.820887    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:31.032060    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	W0921 22:08:31.032060    6788 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	
	W0921 22:08:31.032060    6788 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:31.032060    6788 start.go:128] duration metric: createHost completed in 7.0408266s
	I0921 22:08:31.043064    6788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:08:31.050042    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:31.251132    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:31.251132    6788 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:31.603843    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:31.811144    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:31.811144    6788 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:32.129758    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:32.328469    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:32.328469    6788 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:32.797602    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:33.011178    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	W0921 22:08:33.011178    6788 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	
	W0921 22:08:33.011178    6788 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:33.025131    6788 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:08:33.035107    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:33.231804    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:33.231804    6788 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:33.428339    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:33.624453    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	I0921 22:08:33.624453    6788 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:34.150296    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916
	W0921 22:08:34.358194    6788 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916 returned with exit code 1
	W0921 22:08:34.358194    6788 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	
	W0921 22:08:34.358194    6788 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20220921220745-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220921220745-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	I0921 22:08:34.358194    6788 fix.go:57] fixHost completed within 30.7188136s
	I0921 22:08:34.358194    6788 start.go:83] releasing machines lock for "docker-flags-20220921220745-5916", held for 30.7189906s
	W0921 22:08:34.358194    6788 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-20220921220745-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220921220745-5916 container: docker volume create docker-flags-20220921220745-5916 --label name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220921220745-5916: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220921220745-5916': mkdir /var/lib/docker/volumes/docker-flags-20220921220745-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p docker-flags-20220921220745-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220921220745-5916 container: docker volume create docker-flags-20220921220745-5916 --label name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220921220745-5916: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220921220745-5916': mkdir /var/lib/docker/volumes/docker-flags-20220921220745-5916: read-only file system
	
	I0921 22:08:34.367187    6788 out.go:177] 
	W0921 22:08:34.370186    6788 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220921220745-5916 container: docker volume create docker-flags-20220921220745-5916 --label name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220921220745-5916: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220921220745-5916': mkdir /var/lib/docker/volumes/docker-flags-20220921220745-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for docker-flags-20220921220745-5916 container: docker volume create docker-flags-20220921220745-5916 --label name.minikube.sigs.k8s.io=docker-flags-20220921220745-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create docker-flags-20220921220745-5916: error while creating volume root path '/var/lib/docker/volumes/docker-flags-20220921220745-5916': mkdir /var/lib/docker/volumes/docker-flags-20220921220745-5916: read-only file system
	
	W0921 22:08:34.370186    6788 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:08:34.370186    6788 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:08:34.374198    6788 out.go:177] 

** /stderr **
docker_test.go:47: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p docker-flags-20220921220745-5916 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker" : exit status 60
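The exit above (PR_DOCKER_READONLY_VOL) looks environmental rather than specific to the flags under test: the dedicated 192.168.58.0/24 network could not be created because an existing bridge already overlapped it ("networks have overlapping IPv4"), and the subsequent docker volume create failed because the daemon could not write under /var/lib/docker ("read-only file system"). A minimal stand-alone sketch of that same volume probe, kept outside the test harness; it assumes only that the docker CLI is on PATH, and the volume name is an arbitrary placeholder:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Try to create a throwaway volume, the same step that failed above.
	// If the daemon's /var/lib/docker (inside the Docker Desktop backend here)
	// is read-only, this reproduces the error without minikube.
	out, err := exec.Command("docker", "volume", "create", "readonly-probe").CombinedOutput()
	if err != nil {
		fmt.Printf("volume create failed (daemon storage may be read-only): %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
	// Remove the probe volume again.
	_ = exec.Command("docker", "volume", "rm", "readonly-probe").Run()
}

If this probe fails the same way, restarting Docker Desktop (the suggestion minikube itself prints above) is the usual remedy.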
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220921220745-5916 ssh "sudo systemctl show docker --property=Environment --no-pager"

=== CONT  TestDockerFlags
docker_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p docker-flags-20220921220745-5916 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (1.1645224s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_466.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:52: failed to 'systemctl show docker' inside minikube. args "out/minikube-windows-amd64.exe -p docker-flags-20220921220745-5916 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:57: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:57: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220921220745-5916 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p docker-flags-20220921220745-5916 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (1.2064379s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_e7205990054f4366ee7f5bb530c13b1f3df973dc_3.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:63: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-windows-amd64.exe -p docker-flags-20220921220745-5916 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:67: expected "out/minikube-windows-amd64.exe -p docker-flags-20220921220745-5916 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
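Both of these assertion failures are downstream of the failed start rather than separate regressions: with no "docker-flags-20220921220745-5916" container in existence, the two systemctl show docker probes return empty output, so neither the --docker-env pairs (FOO=BAR, BAZ=BAT) nor the --docker-opt flag (--debug) can appear in it. A rough sketch of the inclusion check that docker_test.go:57 and docker_test.go:67 perform, run here against the empty output captured above:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// output mirrors what the failed run returned: effectively just newlines.
	output := "\n\n"
	for _, want := range []string{"FOO=BAR", "BAZ=BAT", "--debug"} {
		if !strings.Contains(output, want) {
			fmt.Printf("missing %q in docker unit output\n", want)
		}
	}
}

On a cluster that had actually started, the Environment and ExecStart properties of the docker unit would be expected to carry these values, so the checks only become meaningful once the start itself succeeds.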
panic.go:522: *** TestDockerFlags FAILED at 2022-09-21 22:08:36.90414 +0000 GMT m=+2324.454376501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-20220921220745-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect docker-flags-20220921220745-5916: exit status 1 (266.7132ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: docker-flags-20220921220745-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-20220921220745-5916 -n docker-flags-20220921220745-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-20220921220745-5916 -n docker-flags-20220921220745-5916: exit status 7 (614.2221ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:08:37.761094    9128 status.go:247] status error: host: state: unknown state "docker-flags-20220921220745-5916": docker container inspect docker-flags-20220921220745-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20220921220745-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-20220921220745-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-20220921220745-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20220921220745-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20220921220745-5916: (1.7457784s)
--- FAIL: TestDockerFlags (54.06s)
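The two error modes in this test, a bridge network rejected for overlapping IPv4 and a volume root on a read-only /var/lib/docker, point at a Docker Desktop backend in a bad state for the whole run; the same pattern repeats in TestForceSystemdFlag below. A small sketch (assuming only that the docker CLI is on PATH) that lists every Docker network together with its IPv4 subnet, using the same inspect template the log above uses, to spot the stale bridges blocking minikube's 192.168.x.0/24 candidates:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Enumerate network names, then ask each one for its IPAM subnet(s).
	names, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		fmt.Println("docker network ls failed:", err)
		return
	}
	for _, name := range strings.Fields(string(names)) {
		subnet, err := exec.Command("docker", "network", "inspect", "-f",
			"{{range .IPAM.Config}}{{.Subnet}} {{end}}", name).Output()
		if err != nil {
			continue
		}
		fmt.Printf("%-40s %s\n", name, strings.TrimSpace(string(subnet)))
	}
}

Leftover br-* networks from earlier profiles can be removed with docker network prune once no test is running, or cleared wholesale by restarting Docker Desktop.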

TestForceSystemdFlag (54.22s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20220921220434-5916 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-flag-20220921220434-5916 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: exit status 60 (50.2466766s)

-- stdout --
	* [force-systemd-flag-20220921220434-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-flag-20220921220434-5916 in cluster force-systemd-flag-20220921220434-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-20220921220434-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 22:04:34.767282    6820 out.go:296] Setting OutFile to fd 976 ...
	I0921 22:04:34.884629    6820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:04:34.884629    6820 out.go:309] Setting ErrFile to fd 844...
	I0921 22:04:34.884629    6820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:04:34.910845    6820 out.go:303] Setting JSON to false
	I0921 22:04:34.913512    6820 start.go:115] hostinfo: {"hostname":"minikube2","uptime":3943,"bootTime":1663793931,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:04:34.913684    6820 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:04:34.919103    6820 out.go:177] * [force-systemd-flag-20220921220434-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:04:34.922748    6820 notify.go:214] Checking for updates...
	I0921 22:04:34.925973    6820 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:04:34.934267    6820 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:04:34.943077    6820 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:04:34.948934    6820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:04:34.956939    6820 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:04:34.956939    6820 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:04:35.298527    6820 docker.go:137] docker version: linux-20.10.17
	I0921 22:04:35.306545    6820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:04:35.940628    6820 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:66 SystemTime:2022-09-21 22:04:35.5098534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:04:35.943822    6820 out.go:177] * Using the docker driver based on user configuration
	I0921 22:04:35.946926    6820 start.go:284] selected driver: docker
	I0921 22:04:35.947068    6820 start.go:808] validating driver "docker" against <nil>
	I0921 22:04:35.947097    6820 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:04:36.013594    6820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:04:36.631252    6820 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:66 SystemTime:2022-09-21 22:04:36.2163119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:04:36.631632    6820 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:04:36.632351    6820 start_flags.go:849] Wait components to verify : map[apiserver:true system_pods:true]
	I0921 22:04:36.635753    6820 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:04:36.638408    6820 cni.go:95] Creating CNI manager for ""
	I0921 22:04:36.638408    6820 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:04:36.638408    6820 start_flags.go:316] config:
	{Name:force-systemd-flag-20220921220434-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:force-systemd-flag-20220921220434-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:04:36.642305    6820 out.go:177] * Starting control plane node force-systemd-flag-20220921220434-5916 in cluster force-systemd-flag-20220921220434-5916
	I0921 22:04:36.645204    6820 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:04:36.647228    6820 out.go:177] * Pulling base image ...
	I0921 22:04:36.650057    6820 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:04:36.650057    6820 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:04:36.650699    6820 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:04:36.650743    6820 cache.go:57] Caching tarball of preloaded images
	I0921 22:04:36.651189    6820 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:04:36.651189    6820 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:04:36.651189    6820 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-flag-20220921220434-5916\config.json ...
	I0921 22:04:36.651850    6820 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-flag-20220921220434-5916\config.json: {Name:mke5698d7666a1fe44d445cf6c620fa6d4266621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:04:36.852764    6820 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:04:36.852764    6820 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:04:36.852764    6820 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:04:36.852764    6820 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:04:36.852764    6820 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:04:36.852764    6820 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:04:36.852764    6820 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:04:36.852764    6820 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:04:36.852764    6820 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:04:39.297406    6820 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-3135164269: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-3135164269: read-only file system"}
	I0921 22:04:39.297406    6820 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:04:39.297406    6820 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:04:39.297406    6820 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:04:39.297406    6820 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:04:39.535555    6820 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:04:39.535555    6820 image.go:258] Getting image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 22:04:39.779644    6820 image.go:272] Writing image gcr.io/k8s-minikube/kicbase:v0.0.34
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800msI0921 22:04:40.551005    6820 image.go:306] Pulling image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 22:04:41.080111    6820 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:04:41.080111    6820 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:04:41.080111    6820 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:04:41.080111    6820 start.go:364] acquiring machines lock for force-systemd-flag-20220921220434-5916: {Name:mka0512089e1370a11e99ec6c4bf03673acd6173 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:04:41.080793    6820 start.go:368] acquired machines lock for "force-systemd-flag-20220921220434-5916" in 682.1µs
	I0921 22:04:41.081096    6820 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-20220921220434-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:force-systemd-flag-20220921220434-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Soc
ketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:04:41.081222    6820 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:04:41.087436    6820 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:04:41.087939    6820 start.go:159] libmachine.API.Create for "force-systemd-flag-20220921220434-5916" (driver="docker")
	I0921 22:04:41.087939    6820 client.go:168] LocalClient.Create starting
	I0921 22:04:41.087939    6820 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:04:41.088709    6820 main.go:134] libmachine: Decoding PEM data...
	I0921 22:04:41.088760    6820 main.go:134] libmachine: Parsing certificate...
	I0921 22:04:41.088961    6820 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:04:41.089114    6820 main.go:134] libmachine: Decoding PEM data...
	I0921 22:04:41.089114    6820 main.go:134] libmachine: Parsing certificate...
	I0921 22:04:41.097728    6820 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:04:41.310846    6820 cli_runner.go:211] docker network inspect force-systemd-flag-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:04:41.317865    6820 network_create.go:272] running [docker network inspect force-systemd-flag-20220921220434-5916] to gather additional debugging logs...
	I0921 22:04:41.317865    6820 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220921220434-5916
	W0921 22:04:41.558922    6820 cli_runner.go:211] docker network inspect force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:04:41.559046    6820 network_create.go:275] error running [docker network inspect force-systemd-flag-20220921220434-5916]: docker network inspect force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20220921220434-5916
	I0921 22:04:41.559046    6820 network_create.go:277] output of [docker network inspect force-systemd-flag-20220921220434-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20220921220434-5916
	
	** /stderr **
	I0921 22:04:41.566699    6820 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:04:41.809217    6820 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001182a8] misses:0}
	I0921 22:04:41.809985    6820 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:04:41.809985    6820 network_create.go:115] attempt to create docker network force-systemd-flag-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:04:41.818949    6820 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 force-systemd-flag-20220921220434-5916
	W0921 22:04:42.199863    6820 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 force-systemd-flag-20220921220434-5916 returned with exit code 1
	E0921 22:04:42.199936    6820 network_create.go:104] error while trying to create docker network force-systemd-flag-20220921220434-5916 192.168.49.0/24: create docker network force-systemd-flag-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d7a167571121bd81c8e68e372542a57359eec7c1d0194d4df036e5baf97f16a8 (br-d7a167571121): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:04:42.200267    6820 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d7a167571121bd81c8e68e372542a57359eec7c1d0194d4df036e5baf97f16a8 (br-d7a167571121): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d7a167571121bd81c8e68e372542a57359eec7c1d0194d4df036e5baf97f16a8 (br-d7a167571121): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
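The failure above means another Docker bridge network on this host already owns 192.168.49.0/24, most likely left behind by an earlier test profile; the run falls back to an un-dedicated network and, on the later retry in this log, hits the same conflict on 192.168.58.0/24. A minimal diagnostic sketch (sh syntax, not part of the test run) that lists every network with the subnet it owns, so the stale bridge can be found and pruned:

	# Print each Docker network together with its IPAM subnet(s).
	docker network ls --format '{{.Name}}' | while read -r net; do
	    docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' "$net"
	done
	# Remove networks that no longer have any containers attached.
	docker network prune
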
	I0921 22:04:42.211052    6820 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:04:42.530720    6820 cli_runner.go:164] Run: docker volume create force-systemd-flag-20220921220434-5916 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:04:42.768404    6820 cli_runner.go:211] docker volume create force-systemd-flag-20220921220434-5916 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:04:42.768404    6820 client.go:171] LocalClient.Create took 1.680452s
	I0921 22:04:44.781132    6820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:04:44.790887    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:04:44.985037    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:04:44.985037    6820 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:45.277159    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:04:45.470901    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:04:45.470901    6820 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:46.024013    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:04:46.206032    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	W0921 22:04:46.206133    6820 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	
	W0921 22:04:46.206133    6820 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:46.219474    6820 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:04:46.229154    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:04:46.437659    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:04:46.437892    6820 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:46.684755    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:04:46.876327    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:04:46.876481    6820 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:47.240455    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:04:47.409805    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:04:47.410216    6820 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:48.089979    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:04:48.287166    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	W0921 22:04:48.287166    6820 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	
	W0921 22:04:48.287166    6820 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
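Each retry above is minikube polling for the container's published SSH port via a Go template over docker container inspect; the container was never created (its volume-create step had already failed), so every attempt returns "No such container". The same check can be made by hand with the plain docker CLI (illustrative; on this host it fails for the same reason):

	# Show the host port mapped to 22/tcp for this profile's container.
	docker port force-systemd-flag-20220921220434-5916 22/tcp
	# Confirm whether the container exists at all before asking for its ports.
	docker ps -a --filter name=force-systemd-flag-20220921220434-5916 --format '{{.Names}} {{.Status}}'
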
	I0921 22:04:48.287166    6820 start.go:128] duration metric: createHost completed in 7.20589s
	I0921 22:04:48.287166    6820 start.go:83] releasing machines lock for "force-systemd-flag-20220921220434-5916", held for 7.2062172s
	W0921 22:04:48.287166    6820 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220921220434-5916 container: docker volume create force-systemd-flag-20220921220434-5916 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220921220434-5916': mkdir /var/lib/docker/volumes/force-systemd-flag-20220921220434-5916: read-only file system
	I0921 22:04:48.306174    6820 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:48.519254    6820 cli_runner.go:211] docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:04:48.519254    6820 delete.go:82] Unable to get host status for force-systemd-flag-20220921220434-5916, assuming it has already been deleted: state: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	W0921 22:04:48.519254    6820 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220921220434-5916 container: docker volume create force-systemd-flag-20220921220434-5916 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220921220434-5916': mkdir /var/lib/docker/volumes/force-systemd-flag-20220921220434-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220921220434-5916 container: docker volume create force-systemd-flag-20220921220434-5916 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220921220434-5916': mkdir /var/lib/docker/volumes/force-systemd-flag-20220921220434-5916: read-only file system
	
	I0921 22:04:48.519254    6820 start.go:617] Will try again in 5 seconds ...
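Before the retry, note the actual root cause reported above: docker volume create could not write under /var/lib/docker/volumes because the daemon's data root inside the Docker Desktop VM has gone read-only. Two quick checks with the standard docker CLI (illustrative, independent of minikube) localise the problem; restarting Docker Desktop usually clears the read-only state:

	# Where the daemon keeps volumes, images, etc.
	docker info --format '{{.DockerRootDir}}'
	# A throwaway volume hits the same "read-only file system" error if the data root is bad.
	docker volume create smoke-test-vol && docker volume rm smoke-test-vol
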
	I0921 22:04:53.534066    6820 start.go:364] acquiring machines lock for force-systemd-flag-20220921220434-5916: {Name:mka0512089e1370a11e99ec6c4bf03673acd6173 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:04:53.534484    6820 start.go:368] acquired machines lock for "force-systemd-flag-20220921220434-5916" in 221.9µs
	I0921 22:04:53.534815    6820 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:04:53.534895    6820 fix.go:55] fixHost starting: 
	I0921 22:04:53.548253    6820 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:53.736400    6820 cli_runner.go:211] docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:04:53.736400    6820 fix.go:103] recreateIfNeeded on force-systemd-flag-20220921220434-5916: state= err=unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:53.736400    6820 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:04:53.997339    6820 out.go:177] * docker "force-systemd-flag-20220921220434-5916" container is missing, will recreate.
	I0921 22:04:54.000643    6820 delete.go:124] DEMOLISHING force-systemd-flag-20220921220434-5916 ...
	I0921 22:04:54.015320    6820 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:54.218068    6820 cli_runner.go:211] docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:04:54.218068    6820 stop.go:75] unable to get state: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:54.218068    6820 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:54.231089    6820 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:54.464933    6820 cli_runner.go:211] docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:04:54.465140    6820 delete.go:82] Unable to get host status for force-systemd-flag-20220921220434-5916, assuming it has already been deleted: state: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:54.473921    6820 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-20220921220434-5916
	W0921 22:04:54.680676    6820 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:04:54.680766    6820 kic.go:356] could not find the container force-systemd-flag-20220921220434-5916 to remove it. will try anyways
	I0921 22:04:54.688124    6820 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:54.894932    6820 cli_runner.go:211] docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:04:54.894932    6820 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:54.903849    6820 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-20220921220434-5916 /bin/bash -c "sudo init 0"
	W0921 22:04:55.082638    6820 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-20220921220434-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:04:55.082739    6820 oci.go:646] error shutdown force-systemd-flag-20220921220434-5916: docker exec --privileged -t force-systemd-flag-20220921220434-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:56.103967    6820 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:56.328793    6820 cli_runner.go:211] docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:04:56.328861    6820 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:56.328861    6820 oci.go:660] temporary error: container force-systemd-flag-20220921220434-5916 status is  but expect it to be exited
	I0921 22:04:56.328861    6820 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:56.681033    6820 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:56.901906    6820 cli_runner.go:211] docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:04:56.901906    6820 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:56.901906    6820 oci.go:660] temporary error: container force-systemd-flag-20220921220434-5916 status is  but expect it to be exited
	I0921 22:04:56.901906    6820 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:57.362728    6820 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:57.572715    6820 cli_runner.go:211] docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:04:57.572715    6820 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:57.572715    6820 oci.go:660] temporary error: container force-systemd-flag-20220921220434-5916 status is  but expect it to be exited
	I0921 22:04:57.572715    6820 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:58.493753    6820 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}
	W0921 22:04:58.685465    6820 cli_runner.go:211] docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:04:58.685585    6820 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:04:58.685682    6820 oci.go:660] temporary error: container force-systemd-flag-20220921220434-5916 status is  but expect it to be exited
	I0921 22:04:58.685682    6820 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:00.416021    6820 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:00.639375    6820 cli_runner.go:211] docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:00.639495    6820 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:00.639495    6820 oci.go:660] temporary error: container force-systemd-flag-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:00.639495    6820 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:03.979577    6820 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:04.176563    6820 cli_runner.go:211] docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:04.176563    6820 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:04.176563    6820 oci.go:660] temporary error: container force-systemd-flag-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:04.176563    6820 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:06.897327    6820 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:07.117177    6820 cli_runner.go:211] docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:07.117177    6820 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:07.117177    6820 oci.go:660] temporary error: container force-systemd-flag-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:07.117177    6820 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:12.147666    6820 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:12.326607    6820 cli_runner.go:211] docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:12.326607    6820 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:12.326607    6820 oci.go:660] temporary error: container force-systemd-flag-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:12.326607    6820 oci.go:88] couldn't shut down force-systemd-flag-20220921220434-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	 
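With the host state unknown, the run now falls back to forced cleanup (the docker rm -f -v on the next line). The manual equivalent for this profile, sketched with standard docker commands and the names taken from this log, would be roughly:

	docker rm -f -v force-systemd-flag-20220921220434-5916      # container plus its anonymous volumes
	docker volume rm -f force-systemd-flag-20220921220434-5916  # named volume, if it was ever created
	docker network rm force-systemd-flag-20220921220434-5916    # dedicated network, if it was ever created
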
	I0921 22:05:12.333996    6820 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-20220921220434-5916
	I0921 22:05:12.552497    6820 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-20220921220434-5916
	W0921 22:05:12.745674    6820 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:05:12.756706    6820 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:05:12.947693    6820 cli_runner.go:211] docker network inspect force-systemd-flag-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:05:12.953736    6820 network_create.go:272] running [docker network inspect force-systemd-flag-20220921220434-5916] to gather additional debugging logs...
	I0921 22:05:12.953736    6820 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220921220434-5916
	W0921 22:05:13.140763    6820 cli_runner.go:211] docker network inspect force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:05:13.140763    6820 network_create.go:275] error running [docker network inspect force-systemd-flag-20220921220434-5916]: docker network inspect force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20220921220434-5916
	I0921 22:05:13.140763    6820 network_create.go:277] output of [docker network inspect force-systemd-flag-20220921220434-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20220921220434-5916
	
	** /stderr **
	W0921 22:05:13.141762    6820 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:05:13.141762    6820 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:05:14.151393    6820 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:05:14.187155    6820 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:05:14.187155    6820 start.go:159] libmachine.API.Create for "force-systemd-flag-20220921220434-5916" (driver="docker")
	I0921 22:05:14.187155    6820 client.go:168] LocalClient.Create starting
	I0921 22:05:14.188463    6820 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:05:14.188740    6820 main.go:134] libmachine: Decoding PEM data...
	I0921 22:05:14.188805    6820 main.go:134] libmachine: Parsing certificate...
	I0921 22:05:14.189043    6820 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:05:14.189268    6820 main.go:134] libmachine: Decoding PEM data...
	I0921 22:05:14.189340    6820 main.go:134] libmachine: Parsing certificate...
	I0921 22:05:14.215543    6820 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:05:14.445625    6820 cli_runner.go:211] docker network inspect force-systemd-flag-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:05:14.455212    6820 network_create.go:272] running [docker network inspect force-systemd-flag-20220921220434-5916] to gather additional debugging logs...
	I0921 22:05:14.455212    6820 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220921220434-5916
	W0921 22:05:14.660255    6820 cli_runner.go:211] docker network inspect force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:05:14.660255    6820 network_create.go:275] error running [docker network inspect force-systemd-flag-20220921220434-5916]: docker network inspect force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20220921220434-5916
	I0921 22:05:14.660255    6820 network_create.go:277] output of [docker network inspect force-systemd-flag-20220921220434-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20220921220434-5916
	
	** /stderr **
	I0921 22:05:14.667259    6820 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:05:14.895181    6820 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001182a8] amended:false}} dirty:map[] misses:0}
	I0921 22:05:14.895181    6820 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:05:14.913004    6820 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001182a8] amended:true}} dirty:map[192.168.49.0:0xc0001182a8 192.168.58.0:0xc0004fac08] misses:0}
	I0921 22:05:14.913004    6820 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:05:14.913004    6820 network_create.go:115] attempt to create docker network force-systemd-flag-20220921220434-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:05:14.921440    6820 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 force-systemd-flag-20220921220434-5916
	W0921 22:05:15.157197    6820 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 force-systemd-flag-20220921220434-5916 returned with exit code 1
	E0921 22:05:15.157197    6820 network_create.go:104] error while trying to create docker network force-systemd-flag-20220921220434-5916 192.168.58.0/24: create docker network force-systemd-flag-20220921220434-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bcb0f84e1757a7c5b2698c42566c4bdd91d96fca8de7ba6fdbe21b62ef1ca050 (br-bcb0f84e1757): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:05:15.157197    6820 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220921220434-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bcb0f84e1757a7c5b2698c42566c4bdd91d96fca8de7ba6fdbe21b62ef1ca050 (br-bcb0f84e1757): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-flag-20220921220434-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bcb0f84e1757a7c5b2698c42566c4bdd91d96fca8de7ba6fdbe21b62ef1ca050 (br-bcb0f84e1757): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:05:15.174795    6820 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:05:15.458930    6820 cli_runner.go:164] Run: docker volume create force-systemd-flag-20220921220434-5916 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:05:15.698933    6820 cli_runner.go:211] docker volume create force-systemd-flag-20220921220434-5916 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:05:15.699049    6820 client.go:171] LocalClient.Create took 1.511853s
	I0921 22:05:17.723068    6820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:05:17.730491    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:17.955387    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:05:17.955387    6820 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:18.214975    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:18.408513    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:05:18.408513    6820 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:18.713397    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:18.908901    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:05:18.908901    6820 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:19.364164    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:19.574796    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	W0921 22:05:19.574796    6820 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	
	W0921 22:05:19.574796    6820 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:19.588877    6820 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:05:19.598051    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:19.809273    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:05:19.809703    6820 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:19.998810    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:20.224155    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:05:20.224155    6820 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:20.511641    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:20.689941    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:05:20.689941    6820 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:21.190604    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:21.373962    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	W0921 22:05:21.373962    6820 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	
	W0921 22:05:21.373962    6820 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:21.373962    6820 start.go:128] duration metric: createHost completed in 7.2224041s
	I0921 22:05:21.385971    6820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:05:21.391957    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:21.616219    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:05:21.616219    6820 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:21.968757    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:22.160441    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:05:22.160441    6820 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:22.475079    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:22.680973    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:05:22.681106    6820 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:23.148469    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:23.340321    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	W0921 22:05:23.340454    6820 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	
	W0921 22:05:23.340454    6820 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:23.353330    6820 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:05:23.362027    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:23.562865    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:05:23.563138    6820 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:23.760105    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:23.957491    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	I0921 22:05:23.957491    6820 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:24.483557    6820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916
	W0921 22:05:24.692508    6820 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916 returned with exit code 1
	W0921 22:05:24.692508    6820 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	
	W0921 22:05:24.692508    6820 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	I0921 22:05:24.692508    6820 fix.go:57] fixHost completed within 31.15746s
	I0921 22:05:24.692508    6820 start.go:83] releasing machines lock for "force-systemd-flag-20220921220434-5916", held for 31.1576553s
	W0921 22:05:24.693322    6820 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-20220921220434-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220921220434-5916 container: docker volume create force-systemd-flag-20220921220434-5916 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220921220434-5916': mkdir /var/lib/docker/volumes/force-systemd-flag-20220921220434-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-20220921220434-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220921220434-5916 container: docker volume create force-systemd-flag-20220921220434-5916 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220921220434-5916': mkdir /var/lib/docker/volumes/force-systemd-flag-20220921220434-5916: read-only file system
	
	I0921 22:05:24.697621    6820 out.go:177] 
	W0921 22:05:24.699906    6820 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220921220434-5916 container: docker volume create force-systemd-flag-20220921220434-5916 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220921220434-5916': mkdir /var/lib/docker/volumes/force-systemd-flag-20220921220434-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-flag-20220921220434-5916 container: docker volume create force-systemd-flag-20220921220434-5916 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-flag-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/force-systemd-flag-20220921220434-5916': mkdir /var/lib/docker/volumes/force-systemd-flag-20220921220434-5916: read-only file system
	
	W0921 22:05:24.700445    6820 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:05:24.700631    6820 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:05:24.703650    6820 out.go:177] 

** /stderr **
docker_test.go:87: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-flag-20220921220434-5916 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker" : exit status 60
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20220921220434-5916 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p force-systemd-flag-20220921220434-5916 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (1.1572482s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2837ebd22544166cf14c5e2e977cc80019e59e54_7.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-windows-amd64.exe -p force-systemd-flag-20220921220434-5916 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:100: *** TestForceSystemdFlag FAILED at 2022-09-21 22:05:26.0040623 +0000 GMT m=+2133.555742201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-20220921220434-5916

=== CONT  TestForceSystemdFlag
helpers_test.go:231: (dbg) Non-zero exit: docker inspect force-systemd-flag-20220921220434-5916: exit status 1 (298.635ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: force-systemd-flag-20220921220434-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-20220921220434-5916 -n force-systemd-flag-20220921220434-5916

=== CONT  TestForceSystemdFlag
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-20220921220434-5916 -n force-systemd-flag-20220921220434-5916: exit status 7 (626.4529ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:05:26.908926    9104 status.go:247] status error: host: state: unknown state "force-systemd-flag-20220921220434-5916": docker container inspect force-systemd-flag-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20220921220434-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-20220921220434-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-20220921220434-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220921220434-5916

=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220921220434-5916: (1.7700513s)
--- FAIL: TestForceSystemdFlag (54.22s)
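Editor's note: the failure above bottoms out in the Docker Desktop daemon reporting a read-only data root ("mkdir /var/lib/docker/volumes/...: read-only file system"), so minikube can neither create its node volume nor import the kicbase image, and the suggested fix in the output is to restart Docker. A minimal manual probe, outside the test suite and assuming shell access to the same Docker Desktop/WSL2 host, would be to exercise the daemon's volume path directly; the volume name below is chosen only for illustration, and both commands use only the standard docker CLI already seen in this log:

	# Hypothetical check: if this fails with "read-only file system", the daemon's
	# data root is unwritable and minikube is not at fault; restart Docker Desktop
	# (or the WSL2 backend) before re-running the test.
	docker volume create readonly-probe && docker volume rm readonly-probe
	# Confirm which data root the daemon is actually using (the log above shows
	# DockerRootDir:/var/lib/docker).
	docker info --format {{.DockerRootDir}}

If the probe succeeds while the test still fails, the read-only condition is intermittent on this Jenkins worker rather than a persistent daemon misconfiguration.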

TestForceSystemdEnv (53.73s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20220921220625-5916 --memory=2048 --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-env-20220921220625-5916 --memory=2048 --alsologtostderr -v=5 --driver=docker: exit status 60 (49.985494s)

-- stdout --
	* [force-systemd-env-20220921220625-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-env-20220921220625-5916 in cluster force-systemd-env-20220921220625-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-20220921220625-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 22:06:25.708955    2256 out.go:296] Setting OutFile to fd 1664 ...
	I0921 22:06:25.773223    2256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:06:25.773223    2256 out.go:309] Setting ErrFile to fd 1624...
	I0921 22:06:25.773223    2256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:06:25.803207    2256 out.go:303] Setting JSON to false
	I0921 22:06:25.806319    2256 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4054,"bootTime":1663793931,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:06:25.806319    2256 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:06:25.810542    2256 out.go:177] * [force-systemd-env-20220921220625-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:06:25.814825    2256 notify.go:214] Checking for updates...
	I0921 22:06:25.817618    2256 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:06:25.819643    2256 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:06:25.822657    2256 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:06:25.825671    2256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:06:25.831943    2256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0921 22:06:25.836000    2256 config.go:180] Loaded profile config "NoKubernetes-20220921220434-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0921 22:06:25.836560    2256 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:06:25.837030    2256 config.go:180] Loaded profile config "running-upgrade-20220921220528-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0921 22:06:25.837399    2256 config.go:180] Loaded profile config "stopped-upgrade-20220921220434-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0921 22:06:25.837526    2256 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:06:26.125413    2256 docker.go:137] docker version: linux-20.10.17
	I0921 22:06:26.138032    2256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:06:26.700954    2256 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:71 SystemTime:2022-09-21 22:06:26.3102332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:06:26.706600    2256 out.go:177] * Using the docker driver based on user configuration
	I0921 22:06:26.709160    2256 start.go:284] selected driver: docker
	I0921 22:06:26.709160    2256 start.go:808] validating driver "docker" against <nil>
	I0921 22:06:26.709160    2256 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:06:26.796475    2256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:06:27.340764    2256 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:71 SystemTime:2022-09-21 22:06:26.9470109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:06:27.340764    2256 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:06:27.341762    2256 start_flags.go:849] Wait components to verify : map[apiserver:true system_pods:true]
	I0921 22:06:27.345758    2256 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:06:27.347756    2256 cni.go:95] Creating CNI manager for ""
	I0921 22:06:27.347756    2256 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:06:27.347756    2256 start_flags.go:316] config:
	{Name:force-systemd-env-20220921220625-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:force-systemd-env-20220921220625-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:06:27.350755    2256 out.go:177] * Starting control plane node force-systemd-env-20220921220625-5916 in cluster force-systemd-env-20220921220625-5916
	I0921 22:06:27.353770    2256 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:06:27.355755    2256 out.go:177] * Pulling base image ...
	I0921 22:06:27.358771    2256 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:06:27.358771    2256 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:06:27.358771    2256 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:06:27.358771    2256 cache.go:57] Caching tarball of preloaded images
	I0921 22:06:27.359757    2256 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:06:27.359757    2256 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:06:27.359757    2256 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-20220921220625-5916\config.json ...
	I0921 22:06:27.359757    2256 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-20220921220625-5916\config.json: {Name:mk4ebd61c7a36c4cf83a6348c9367771e4eb43d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:06:27.577150    2256 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:06:27.577150    2256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:06:27.577150    2256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:06:27.577150    2256 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:06:27.577150    2256 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:06:27.577150    2256 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:06:27.577150    2256 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:06:27.577150    2256 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:06:27.577150    2256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:06:29.953211    2256 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-1308150464: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-1308150464: read-only file system"}
	I0921 22:06:29.953211    2256 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:06:29.953211    2256 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:06:29.953746    2256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:06:29.954077    2256 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:06:30.198973    2256 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:06:30.198973    2256 image.go:258] Getting image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 22:06:30.555822    2256 image.go:272] Writing image gcr.io/k8s-minikube/kicbase:v0.0.34
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800msI0921 22:06:31.355119    2256 image.go:306] Pulling image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 22:06:31.771023    2256 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:06:31.771023    2256 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:06:31.771023    2256 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:06:31.771023    2256 start.go:364] acquiring machines lock for force-systemd-env-20220921220625-5916: {Name:mk2b52fdc6c535dd39f4d6fb5e840b899e0b1131 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:06:31.771023    2256 start.go:368] acquired machines lock for "force-systemd-env-20220921220625-5916" in 0s
	I0921 22:06:31.772028    2256 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-20220921220625-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:force-systemd-env-20220921220625-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Socke
tVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:06:31.772028    2256 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:06:31.818151    2256 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:06:31.818995    2256 start.go:159] libmachine.API.Create for "force-systemd-env-20220921220625-5916" (driver="docker")
	I0921 22:06:31.818995    2256 client.go:168] LocalClient.Create starting
	I0921 22:06:31.819302    2256 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:06:31.819968    2256 main.go:134] libmachine: Decoding PEM data...
	I0921 22:06:31.819968    2256 main.go:134] libmachine: Parsing certificate...
	I0921 22:06:31.819968    2256 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:06:31.819968    2256 main.go:134] libmachine: Decoding PEM data...
	I0921 22:06:31.819968    2256 main.go:134] libmachine: Parsing certificate...
	I0921 22:06:31.833501    2256 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220921220625-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:06:32.023290    2256 cli_runner.go:211] docker network inspect force-systemd-env-20220921220625-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:06:32.030337    2256 network_create.go:272] running [docker network inspect force-systemd-env-20220921220625-5916] to gather additional debugging logs...
	I0921 22:06:32.030337    2256 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220921220625-5916
	W0921 22:06:32.241578    2256 cli_runner.go:211] docker network inspect force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:06:32.241578    2256 network_create.go:275] error running [docker network inspect force-systemd-env-20220921220625-5916]: docker network inspect force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220921220625-5916
	I0921 22:06:32.241578    2256 network_create.go:277] output of [docker network inspect force-systemd-env-20220921220625-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220921220625-5916
	
	** /stderr **
	I0921 22:06:32.251279    2256 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:06:32.550670    2256 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00050e330] misses:0}
	I0921 22:06:32.550670    2256 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:06:32.550670    2256 network_create.go:115] attempt to create docker network force-systemd-env-20220921220625-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:06:32.560100    2256 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 force-systemd-env-20220921220625-5916
	W0921 22:06:32.789701    2256 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 force-systemd-env-20220921220625-5916 returned with exit code 1
	E0921 22:06:32.789701    2256 network_create.go:104] error while trying to create docker network force-systemd-env-20220921220625-5916 192.168.49.0/24: create docker network force-systemd-env-20220921220625-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 57e3710d6a7349466e20bcc619aa210fdb98cd6759a0f5041210dbc15abafd7d (br-57e3710d6a73): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:06:32.790443    2256 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220921220625-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 57e3710d6a7349466e20bcc619aa210fdb98cd6759a0f5041210dbc15abafd7d (br-57e3710d6a73): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220921220625-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 57e3710d6a7349466e20bcc619aa210fdb98cd6759a0f5041210dbc15abafd7d (br-57e3710d6a73): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:06:32.818826    2256 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:06:33.111441    2256 cli_runner.go:164] Run: docker volume create force-systemd-env-20220921220625-5916 --label name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:06:33.304802    2256 cli_runner.go:211] docker volume create force-systemd-env-20220921220625-5916 --label name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:06:33.304802    2256 client.go:171] LocalClient.Create took 1.4857961s
	I0921 22:06:35.327096    2256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:06:35.336715    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:06:35.528096    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:06:35.528096    2256 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:35.827779    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:06:36.249238    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:06:36.249238    2256 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:36.799935    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:06:37.011558    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	W0921 22:06:37.011558    2256 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	
	W0921 22:06:37.011558    2256 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:37.021593    2256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:06:37.028677    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:06:37.232475    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:06:37.232475    2256 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:37.476014    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:06:37.685406    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:06:37.685406    2256 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:38.041036    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:06:38.245985    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:06:38.246332    2256 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:38.921598    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:06:39.134482    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	W0921 22:06:39.134710    2256 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	
	W0921 22:06:39.134710    2256 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:39.134710    2256 start.go:128] duration metric: createHost completed in 7.3626263s
	I0921 22:06:39.134710    2256 start.go:83] releasing machines lock for "force-systemd-env-20220921220625-5916", held for 7.3636304s
	W0921 22:06:39.134710    2256 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220921220625-5916 container: docker volume create force-systemd-env-20220921220625-5916 --label name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220921220625-5916: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220921220625-5916': mkdir /var/lib/docker/volumes/force-systemd-env-20220921220625-5916: read-only file system
	I0921 22:06:39.149748    2256 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}
	W0921 22:06:39.320896    2256 cli_runner.go:211] docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:39.320896    2256 delete.go:82] Unable to get host status for force-systemd-env-20220921220625-5916, assuming it has already been deleted: state: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	W0921 22:06:39.320896    2256 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220921220625-5916 container: docker volume create force-systemd-env-20220921220625-5916 --label name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220921220625-5916: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220921220625-5916': mkdir /var/lib/docker/volumes/force-systemd-env-20220921220625-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220921220625-5916 container: docker volume create force-systemd-env-20220921220625-5916 --label name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220921220625-5916: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220921220625-5916': mkdir /var/lib/docker/volumes/force-systemd-env-20220921220625-5916: read-only file system
	
	I0921 22:06:39.320896    2256 start.go:617] Will try again in 5 seconds ...
	I0921 22:06:44.322156    2256 start.go:364] acquiring machines lock for force-systemd-env-20220921220625-5916: {Name:mk2b52fdc6c535dd39f4d6fb5e840b899e0b1131 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:06:44.322156    2256 start.go:368] acquired machines lock for "force-systemd-env-20220921220625-5916" in 0s
	I0921 22:06:44.322680    2256 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:06:44.322680    2256 fix.go:55] fixHost starting: 
	I0921 22:06:44.336976    2256 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}
	W0921 22:06:44.552290    2256 cli_runner.go:211] docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:44.552366    2256 fix.go:103] recreateIfNeeded on force-systemd-env-20220921220625-5916: state= err=unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:44.552366    2256 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:06:44.609625    2256 out.go:177] * docker "force-systemd-env-20220921220625-5916" container is missing, will recreate.
	I0921 22:06:44.612401    2256 delete.go:124] DEMOLISHING force-systemd-env-20220921220625-5916 ...
	I0921 22:06:44.632543    2256 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}
	W0921 22:06:44.846630    2256 cli_runner.go:211] docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:06:44.846811    2256 stop.go:75] unable to get state: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:44.846876    2256 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:44.867914    2256 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}
	W0921 22:06:45.082022    2256 cli_runner.go:211] docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:45.082022    2256 delete.go:82] Unable to get host status for force-systemd-env-20220921220625-5916, assuming it has already been deleted: state: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:45.091021    2256 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-20220921220625-5916
	W0921 22:06:45.298524    2256 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:06:45.298708    2256 kic.go:356] could not find the container force-systemd-env-20220921220625-5916 to remove it. will try anyways
	I0921 22:06:45.306187    2256 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}
	W0921 22:06:45.485117    2256 cli_runner.go:211] docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:06:45.485117    2256 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:45.493059    2256 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-20220921220625-5916 /bin/bash -c "sudo init 0"
	W0921 22:06:45.702154    2256 cli_runner.go:211] docker exec --privileged -t force-systemd-env-20220921220625-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:06:45.702154    2256 oci.go:646] error shutdown force-systemd-env-20220921220625-5916: docker exec --privileged -t force-systemd-env-20220921220625-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:46.714288    2256 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}
	W0921 22:06:46.904992    2256 cli_runner.go:211] docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:46.904992    2256 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:46.904992    2256 oci.go:660] temporary error: container force-systemd-env-20220921220625-5916 status is  but expect it to be exited
	I0921 22:06:46.904992    2256 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:47.242215    2256 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}
	W0921 22:06:47.451537    2256 cli_runner.go:211] docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:47.451537    2256 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:47.451537    2256 oci.go:660] temporary error: container force-systemd-env-20220921220625-5916 status is  but expect it to be exited
	I0921 22:06:47.451537    2256 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:47.921766    2256 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}
	W0921 22:06:48.161879    2256 cli_runner.go:211] docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:48.161879    2256 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:48.161879    2256 oci.go:660] temporary error: container force-systemd-env-20220921220625-5916 status is  but expect it to be exited
	I0921 22:06:48.161879    2256 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:49.077711    2256 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}
	W0921 22:06:49.306419    2256 cli_runner.go:211] docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:49.306528    2256 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:49.306585    2256 oci.go:660] temporary error: container force-systemd-env-20220921220625-5916 status is  but expect it to be exited
	I0921 22:06:49.306619    2256 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:51.037744    2256 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}
	W0921 22:06:51.259510    2256 cli_runner.go:211] docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:51.260493    2256 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:51.260552    2256 oci.go:660] temporary error: container force-systemd-env-20220921220625-5916 status is  but expect it to be exited
	I0921 22:06:51.260552    2256 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:54.606747    2256 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}
	W0921 22:06:54.806762    2256 cli_runner.go:211] docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:54.806762    2256 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:54.806762    2256 oci.go:660] temporary error: container force-systemd-env-20220921220625-5916 status is  but expect it to be exited
	I0921 22:06:54.806762    2256 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:57.533570    2256 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}
	W0921 22:06:57.749023    2256 cli_runner.go:211] docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:57.749023    2256 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:06:57.749023    2256 oci.go:660] temporary error: container force-systemd-env-20220921220625-5916 status is  but expect it to be exited
	I0921 22:06:57.749023    2256 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:02.776602    2256 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}
	W0921 22:07:02.969358    2256 cli_runner.go:211] docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:07:02.969358    2256 oci.go:658] temporary error verifying shutdown: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:02.969358    2256 oci.go:660] temporary error: container force-systemd-env-20220921220625-5916 status is  but expect it to be exited
	I0921 22:07:02.969358    2256 oci.go:88] couldn't shut down force-systemd-env-20220921220625-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	 
	I0921 22:07:02.978353    2256 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-20220921220625-5916
	I0921 22:07:03.184861    2256 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-20220921220625-5916
	W0921 22:07:03.411037    2256 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:07:03.419208    2256 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220921220625-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:07:03.642179    2256 cli_runner.go:211] docker network inspect force-systemd-env-20220921220625-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:07:03.649242    2256 network_create.go:272] running [docker network inspect force-systemd-env-20220921220625-5916] to gather additional debugging logs...
	I0921 22:07:03.649242    2256 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220921220625-5916
	W0921 22:07:03.905838    2256 cli_runner.go:211] docker network inspect force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:07:03.905982    2256 network_create.go:275] error running [docker network inspect force-systemd-env-20220921220625-5916]: docker network inspect force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220921220625-5916
	I0921 22:07:03.905982    2256 network_create.go:277] output of [docker network inspect force-systemd-env-20220921220625-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220921220625-5916
	
	** /stderr **
	W0921 22:07:03.907070    2256 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:07:03.907070    2256 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:07:04.928330    2256 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:07:05.108579    2256 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:07:05.109308    2256 start.go:159] libmachine.API.Create for "force-systemd-env-20220921220625-5916" (driver="docker")
	I0921 22:07:05.109308    2256 client.go:168] LocalClient.Create starting
	I0921 22:07:05.110363    2256 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:07:05.110637    2256 main.go:134] libmachine: Decoding PEM data...
	I0921 22:07:05.110722    2256 main.go:134] libmachine: Parsing certificate...
	I0921 22:07:05.110760    2256 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:07:05.110760    2256 main.go:134] libmachine: Decoding PEM data...
	I0921 22:07:05.110760    2256 main.go:134] libmachine: Parsing certificate...
	I0921 22:07:05.120252    2256 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220921220625-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:07:05.321739    2256 cli_runner.go:211] docker network inspect force-systemd-env-20220921220625-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:07:05.330186    2256 network_create.go:272] running [docker network inspect force-systemd-env-20220921220625-5916] to gather additional debugging logs...
	I0921 22:07:05.330186    2256 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220921220625-5916
	W0921 22:07:05.538769    2256 cli_runner.go:211] docker network inspect force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:07:05.538866    2256 network_create.go:275] error running [docker network inspect force-systemd-env-20220921220625-5916]: docker network inspect force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220921220625-5916
	I0921 22:07:05.538866    2256 network_create.go:277] output of [docker network inspect force-systemd-env-20220921220625-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220921220625-5916
	
	** /stderr **
	I0921 22:07:05.546874    2256 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:07:05.757235    2256 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00050e330] amended:false}} dirty:map[] misses:0}
	I0921 22:07:05.757235    2256 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:07:05.773269    2256 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00050e330] amended:true}} dirty:map[192.168.49.0:0xc00050e330 192.168.58.0:0xc0006ce888] misses:0}
	I0921 22:07:05.773269    2256 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:07:05.774255    2256 network_create.go:115] attempt to create docker network force-systemd-env-20220921220625-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:07:05.781681    2256 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 force-systemd-env-20220921220625-5916
	W0921 22:07:05.994560    2256 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 force-systemd-env-20220921220625-5916 returned with exit code 1
	E0921 22:07:05.995098    2256 network_create.go:104] error while trying to create docker network force-systemd-env-20220921220625-5916 192.168.58.0/24: create docker network force-systemd-env-20220921220625-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1a5e4babb9e3529b307aac82fb441bebf0150c39cc460199c5a981636977f23f (br-1a5e4babb9e3): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:07:05.995098    2256 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220921220625-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1a5e4babb9e3529b307aac82fb441bebf0150c39cc460199c5a981636977f23f (br-1a5e4babb9e3): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network force-systemd-env-20220921220625-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 1a5e4babb9e3529b307aac82fb441bebf0150c39cc460199c5a981636977f23f (br-1a5e4babb9e3): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:07:06.016077    2256 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:07:06.222631    2256 cli_runner.go:164] Run: docker volume create force-systemd-env-20220921220625-5916 --label name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:07:06.419638    2256 cli_runner.go:211] docker volume create force-systemd-env-20220921220625-5916 --label name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:07:06.419638    2256 client.go:171] LocalClient.Create took 1.3103201s
	I0921 22:07:08.438221    2256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:07:08.444828    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:08.656647    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:07:08.656720    2256 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:08.912739    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:09.122722    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:07:09.122798    2256 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:09.430761    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:09.622496    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:07:09.622496    2256 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:10.081255    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:10.274816    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	W0921 22:07:10.274816    2256 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	
	W0921 22:07:10.274816    2256 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:10.286747    2256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:07:10.295672    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:10.506807    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:07:10.506924    2256 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:10.699832    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:10.908069    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:07:10.908069    2256 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:11.194923    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:11.374022    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:07:11.374022    2256 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:11.881856    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:12.086401    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	W0921 22:07:12.086401    2256 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	
	W0921 22:07:12.086401    2256 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:12.086401    2256 start.go:128] duration metric: createHost completed in 7.1580158s
	I0921 22:07:12.096696    2256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:07:12.103700    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:12.289378    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:07:12.289428    2256 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:12.639321    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:12.834948    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:07:12.834948    2256 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:13.141815    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:13.333733    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:07:13.333733    2256 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:13.791785    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:14.020988    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	W0921 22:07:14.021049    2256 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	
	W0921 22:07:14.021049    2256 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:14.034660    2256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:07:14.043471    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:14.281512    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:07:14.281512    2256 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:14.477598    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:14.670954    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	I0921 22:07:14.670954    2256 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:15.208656    2256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916
	W0921 22:07:15.413123    2256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916 returned with exit code 1
	W0921 22:07:15.413123    2256 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	
	W0921 22:07:15.413123    2256 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20220921220625-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220921220625-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	I0921 22:07:15.413123    2256 fix.go:57] fixHost completed within 31.0902066s
	I0921 22:07:15.413123    2256 start.go:83] releasing machines lock for "force-systemd-env-20220921220625-5916", held for 31.0907308s
	W0921 22:07:15.413123    2256 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-20220921220625-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220921220625-5916 container: docker volume create force-systemd-env-20220921220625-5916 --label name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220921220625-5916: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220921220625-5916': mkdir /var/lib/docker/volumes/force-systemd-env-20220921220625-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-20220921220625-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220921220625-5916 container: docker volume create force-systemd-env-20220921220625-5916 --label name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220921220625-5916: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220921220625-5916': mkdir /var/lib/docker/volumes/force-systemd-env-20220921220625-5916: read-only file system
	
	I0921 22:07:15.418187    2256 out.go:177] 
	W0921 22:07:15.420149    2256 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220921220625-5916 container: docker volume create force-systemd-env-20220921220625-5916 --label name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220921220625-5916: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220921220625-5916': mkdir /var/lib/docker/volumes/force-systemd-env-20220921220625-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for force-systemd-env-20220921220625-5916 container: docker volume create force-systemd-env-20220921220625-5916 --label name.minikube.sigs.k8s.io=force-systemd-env-20220921220625-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create force-systemd-env-20220921220625-5916: error while creating volume root path '/var/lib/docker/volumes/force-systemd-env-20220921220625-5916': mkdir /var/lib/docker/volumes/force-systemd-env-20220921220625-5916: read-only file system
	
	W0921 22:07:15.420149    2256 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:07:15.420149    2256 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:07:15.424199    2256 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:151: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-env-20220921220625-5916 --memory=2048 --alsologtostderr -v=5 --driver=docker" : exit status 60
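The root cause of this failure is the PR_DOCKER_READONLY_VOL exit above: the Docker Desktop daemon's volume root (/var/lib/docker/volumes) has gone read-only, so every "docker volume create" fails before a node container can be built, and the report's own suggestion is to restart Docker. Below is a minimal, hypothetical probe for that condition, assuming only a local docker CLI; the volume name is illustrative and this is not part of the test suite.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Attempt the same kind of volume create that minikube attempted above.
        // On a healthy Docker Desktop this succeeds; on the broken daemon it
        // fails with "read-only file system", as seen in the trace.
        name := "readonly-probe" // illustrative name, not the test's volume
        out, err := exec.Command("docker", "volume", "create", name).CombinedOutput()
        if err != nil {
            if strings.Contains(string(out), "read-only file system") {
                fmt.Println("volume root is read-only; restart Docker (see minikube issue 6825)")
                return
            }
            fmt.Printf("volume create failed for another reason: %v\n%s", err, out)
            return
        }
        // Clean up the probe volume if the create succeeded.
        _ = exec.Command("docker", "volume", "rm", name).Run()
        fmt.Println("volume create works; the read-only condition is not present")
    }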
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20220921220625-5916 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p force-systemd-env-20220921220625-5916 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (1.1407701s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2837ebd22544166cf14c5e2e977cc80019e59e54_7.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-windows-amd64.exe -p force-systemd-env-20220921220625-5916 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
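For context, the assertion behind docker_test.go:104 is that a cluster started for TestForceSystemdEnv should report systemd as Docker's cgroup driver inside the node; the expected value here is inferred from the test name rather than quoted from the test source. A hypothetical stand-alone version of that check, assuming the profile from this run actually exists and is running:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        profile := "force-systemd-env-20220921220625-5916" // profile from the run above
        out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", profile,
            "ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
        if err != nil {
            // With the host never created, this fails exactly as above (GUEST_STATUS).
            fmt.Printf("minikube ssh failed: %v\n%s", err, out)
            return
        }
        if driver := strings.TrimSpace(string(out)); driver != "systemd" {
            fmt.Printf("expected cgroup driver %q, got %q\n", "systemd", driver)
        } else {
            fmt.Println("cgroup driver is systemd")
        }
    }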
docker_test.go:160: *** TestForceSystemdEnv FAILED at 2022-09-21 22:07:16.6899173 +0000 GMT m=+2244.240758601
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-20220921220625-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect force-systemd-env-20220921220625-5916: exit status 1 (253.5223ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: force-systemd-env-20220921220625-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
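The post-mortem treats "docker inspect" printing [] plus "No such object" as "the container is simply gone" rather than a daemon error, the same distinction the earlier fix.go and oci.go lines make for "No such container". A small, hypothetical helper that encodes that distinction; it is a sketch, not the helper the test suite uses:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // exists reports whether Docker knows an object (container, network, volume)
    // by the given name, treating "No such object/container" as a clean "no".
    func exists(name string) (bool, error) {
        out, err := exec.Command("docker", "inspect", name).CombinedOutput()
        if err == nil {
            return true, nil
        }
        if strings.Contains(string(out), "No such object") ||
            strings.Contains(string(out), "No such container") {
            return false, nil
        }
        return false, fmt.Errorf("docker inspect %s: %v\n%s", name, err, out)
    }

    func main() {
        ok, err := exists("force-systemd-env-20220921220625-5916")
        fmt.Println(ok, err)
    }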
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-20220921220625-5916 -n force-systemd-env-20220921220625-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-20220921220625-5916 -n force-systemd-env-20220921220625-5916: exit status 7 (548.462ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:07:17.469057    3108 status.go:247] status error: host: state: unknown state "force-systemd-env-20220921220625-5916": docker container inspect force-systemd-env-20220921220625-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20220921220625-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-20220921220625-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-20220921220625-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20220921220625-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20220921220625-5916: (1.7070721s)
--- FAIL: TestForceSystemdEnv (53.73s)
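The repeated retry.go:31 lines earlier in this trace show the shape of the shutdown verification: re-run the state check with growing, jittered delays (roughly 0.3s, 0.45s, 0.9s, 1.7s, 3.3s, ...) until it passes or the time budget runs out. A generic sketch of that pattern, assuming nothing about minikube's actual retry package:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryExpo re-runs check with roughly doubling delays until it succeeds or
    // the overall budget is exhausted, logging each retry like the trace above.
    func retryExpo(check func() error, initial, budget time.Duration) error {
        deadline := time.Now().Add(budget)
        delay := initial
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("gave up after %s: %w", budget, err)
            }
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
    }

    func main() {
        err := retryExpo(func() error {
            return errors.New("couldn't verify container is exited")
        }, 300*time.Millisecond, 3*time.Second)
        fmt.Println(err)
    }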

                                                
                                    
TestErrorSpam/setup (48.48s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20220921213151-5916 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 --driver=docker
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p nospam-20220921213151-5916 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 --driver=docker: exit status 60 (48.4734369s)

                                                
                                                
-- stdout --
	* [nospam-20220921213151-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node nospam-20220921213151-5916 in cluster nospam-20220921213151-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2250MB) ...
	* docker "nospam-20220921213151-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2250MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [_______________________] ?% ? p/s 1.0s! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 21:31:58.311940    3776 network_create.go:104] error while trying to create docker network nospam-20220921213151-5916 192.168.49.0/24: create docker network nospam-20220921213151-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=nospam-20220921213151-5916 nospam-20220921213151-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dc578794e48465ea18e823f53989b7d6b8f59c09d8524ac7b4ed37fbeaf99e00 (br-dc578794e484): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220921213151-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=nospam-20220921213151-5916 nospam-20220921213151-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network dc578794e48465ea18e823f53989b7d6b8f59c09d8524ac7b4ed37fbeaf99e00 (br-dc578794e484): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for nospam-20220921213151-5916 container: docker volume create nospam-20220921213151-5916 --label name.minikube.sigs.k8s.io=nospam-20220921213151-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create nospam-20220921213151-5916: error while creating volume root path '/var/lib/docker/volumes/nospam-20220921213151-5916': mkdir /var/lib/docker/volumes/nospam-20220921213151-5916: read-only file system
	
	E0921 21:32:30.455045    3776 network_create.go:104] error while trying to create docker network nospam-20220921213151-5916 192.168.58.0/24: create docker network nospam-20220921213151-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=nospam-20220921213151-5916 nospam-20220921213151-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network eddcf3cf086cd667e487c56e48b4bc85b06a286ec089ab06055f82ccde164175 (br-eddcf3cf086c): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220921213151-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=nospam-20220921213151-5916 nospam-20220921213151-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network eddcf3cf086cd667e487c56e48b4bc85b06a286ec089ab06055f82ccde164175 (br-eddcf3cf086c): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p nospam-20220921213151-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220921213151-5916 container: docker volume create nospam-20220921213151-5916 --label name.minikube.sigs.k8s.io=nospam-20220921213151-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create nospam-20220921213151-5916: error while creating volume root path '/var/lib/docker/volumes/nospam-20220921213151-5916': mkdir /var/lib/docker/volumes/nospam-20220921213151-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220921213151-5916 container: docker volume create nospam-20220921213151-5916 --label name.minikube.sigs.k8s.io=nospam-20220921213151-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create nospam-20220921213151-5916: error while creating volume root path '/var/lib/docker/volumes/nospam-20220921213151-5916': mkdir /var/lib/docker/volumes/nospam-20220921213151-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-windows-amd64.exe start -p nospam-20220921213151-5916 -n=1 --memory=2250 --wait=false --log_dir=C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220921213151-5916 --driver=docker" failed: exit status 60
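Both start attempts above fail with "networks have overlapping IPv4" while trying to claim 192.168.49.0/24 and then 192.168.58.0/24, so some existing bridge already owns those ranges. Below is a hypothetical diagnostic, not part of the test suite, that lists every Docker network's IPAM subnets using the same {{range .IPAM.Config}}{{.Subnet}}{{end}} template the log itself runs, so the conflicting bridge can be identified and, if stale, removed with docker network rm:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        ids, err := exec.Command("docker", "network", "ls", "-q").Output()
        if err != nil {
            fmt.Println("docker network ls failed:", err)
            return
        }
        for _, id := range strings.Fields(string(ids)) {
            // Prints "<network name>: <subnet(s)>" for each network.
            out, err := exec.Command("docker", "network", "inspect", id,
                "--format", "{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
            if err != nil {
                continue
            }
            fmt.Print(string(out))
        }
    }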
error_spam_test.go:91: acceptable stderr: "    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [_______________________] ?% ? p/s 1.0s! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image"
error_spam_test.go:96: unexpected stderr: "E0921 21:31:58.311940    3776 network_create.go:104] error while trying to create docker network nospam-20220921213151-5916 192.168.49.0/24: create docker network nospam-20220921213151-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=nospam-20220921213151-5916 nospam-20220921213151-5916: exit status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Error response from daemon: cannot create network dc578794e48465ea18e823f53989b7d6b8f59c09d8524ac7b4ed37fbeaf99e00 (br-dc578794e484): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4"
error_spam_test.go:96: unexpected stderr: "! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220921213151-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=nospam-20220921213151-5916 nospam-20220921213151-5916: exit status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Error response from daemon: cannot create network dc578794e48465ea18e823f53989b7d6b8f59c09d8524ac7b4ed37fbeaf99e00 (br-dc578794e484): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4"
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for nospam-20220921213151-5916 container: docker volume create nospam-20220921213151-5916 --label name.minikube.sigs.k8s.io=nospam-20220921213151-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Error response from daemon: create nospam-20220921213151-5916: error while creating volume root path '/var/lib/docker/volumes/nospam-20220921213151-5916': mkdir /var/lib/docker/volumes/nospam-20220921213151-5916: read-only file system"
error_spam_test.go:96: unexpected stderr: "E0921 21:32:30.455045    3776 network_create.go:104] error while trying to create docker network nospam-20220921213151-5916 192.168.58.0/24: create docker network nospam-20220921213151-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=nospam-20220921213151-5916 nospam-20220921213151-5916: exit status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Error response from daemon: cannot create network eddcf3cf086cd667e487c56e48b4bc85b06a286ec089ab06055f82ccde164175 (br-eddcf3cf086c): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4"
error_spam_test.go:96: unexpected stderr: "! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220921213151-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=nospam-20220921213151-5916 nospam-20220921213151-5916: exit status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Error response from daemon: cannot create network eddcf3cf086cd667e487c56e48b4bc85b06a286ec089ab06055f82ccde164175 (br-eddcf3cf086c): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4"
error_spam_test.go:96: unexpected stderr: "* Failed to start docker container. Running \"minikube delete -p nospam-20220921213151-5916\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220921213151-5916 container: docker volume create nospam-20220921213151-5916 --label name.minikube.sigs.k8s.io=nospam-20220921213151-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Error response from daemon: create nospam-20220921213151-5916: error while creating volume root path '/var/lib/docker/volumes/nospam-20220921213151-5916': mkdir /var/lib/docker/volumes/nospam-20220921213151-5916: read-only file system"
error_spam_test.go:96: unexpected stderr: "X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220921213151-5916 container: docker volume create nospam-20220921213151-5916 --label name.minikube.sigs.k8s.io=nospam-20220921213151-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Error response from daemon: create nospam-20220921213151-5916: error while creating volume root path '/var/lib/docker/volumes/nospam-20220921213151-5916': mkdir /var/lib/docker/volumes/nospam-20220921213151-5916: read-only file system"
error_spam_test.go:96: unexpected stderr: "* Suggestion: Restart Docker"
error_spam_test.go:96: unexpected stderr: "* Related issue: https://github.com/kubernetes/minikube/issues/6825"
error_spam_test.go:110: minikube stdout:
* [nospam-20220921213151-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
- KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
- MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
- MINIKUBE_LOCATION=14995
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node nospam-20220921213151-5916 in cluster nospam-20220921213151-5916
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* docker "nospam-20220921213151-5916" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=2250MB) ...

error_spam_test.go:111: minikube stderr:
> gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [_______________________] ?% ? p/s 1.0s! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
E0921 21:31:58.311940    3776 network_create.go:104] error while trying to create docker network nospam-20220921213151-5916 192.168.49.0/24: create docker network nospam-20220921213151-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=nospam-20220921213151-5916 nospam-20220921213151-5916: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network dc578794e48465ea18e823f53989b7d6b8f59c09d8524ac7b4ed37fbeaf99e00 (br-dc578794e484): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220921213151-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=nospam-20220921213151-5916 nospam-20220921213151-5916: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network dc578794e48465ea18e823f53989b7d6b8f59c09d8524ac7b4ed37fbeaf99e00 (br-dc578794e484): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4

! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for nospam-20220921213151-5916 container: docker volume create nospam-20220921213151-5916 --label name.minikube.sigs.k8s.io=nospam-20220921213151-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create nospam-20220921213151-5916: error while creating volume root path '/var/lib/docker/volumes/nospam-20220921213151-5916': mkdir /var/lib/docker/volumes/nospam-20220921213151-5916: read-only file system

E0921 21:32:30.455045    3776 network_create.go:104] error while trying to create docker network nospam-20220921213151-5916 192.168.58.0/24: create docker network nospam-20220921213151-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=nospam-20220921213151-5916 nospam-20220921213151-5916: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network eddcf3cf086cd667e487c56e48b4bc85b06a286ec089ab06055f82ccde164175 (br-eddcf3cf086c): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network nospam-20220921213151-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=nospam-20220921213151-5916 nospam-20220921213151-5916: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network eddcf3cf086cd667e487c56e48b4bc85b06a286ec089ab06055f82ccde164175 (br-eddcf3cf086c): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4

* Failed to start docker container. Running "minikube delete -p nospam-20220921213151-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220921213151-5916 container: docker volume create nospam-20220921213151-5916 --label name.minikube.sigs.k8s.io=nospam-20220921213151-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create nospam-20220921213151-5916: error while creating volume root path '/var/lib/docker/volumes/nospam-20220921213151-5916': mkdir /var/lib/docker/volumes/nospam-20220921213151-5916: read-only file system

X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for nospam-20220921213151-5916 container: docker volume create nospam-20220921213151-5916 --label name.minikube.sigs.k8s.io=nospam-20220921213151-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create nospam-20220921213151-5916: error while creating volume root path '/var/lib/docker/volumes/nospam-20220921213151-5916': mkdir /var/lib/docker/volumes/nospam-20220921213151-5916: read-only file system

* Suggestion: Restart Docker
* Related issue: https://github.com/kubernetes/minikube/issues/6825
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (48.48s)
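The TestErrorSpam/setup failure above reduces to two Docker-side conditions the log reports directly: each requested bridge subnet (192.168.49.0/24, then 192.168.58.0/24) conflicts with a network that already exists on the host, and /var/lib/docker inside the Docker Desktop VM is read-only, so the node volume cannot be created. A hedged diagnostic sketch using standard Docker CLI commands (not part of the captured test output; the network ID a04d36bfb3cf is taken from the log above):

    # list existing bridge networks and their IDs
    docker network ls --filter driver=bridge --format '{{.ID}}  {{.Name}}'
    # show which subnet the conflicting network occupies
    docker network inspect a04d36bfb3cf --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'
    # remove unused networks so the 192.168.49.0/24 range becomes free again
    docker network prune -f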

TestFunctional/serial/StartWithProxy (50.05s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
functional_test.go:2160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: exit status 60 (49.2276628s)

-- stdout --
	* [functional-20220921213353-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node functional-20220921213353-5916 in cluster functional-20220921213353-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20220921213353-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:57935 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:57935 to docker env.
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 900ms! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! Local proxy ignored: not passing HTTP_PROXY=localhost:57935 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:57935 to docker env.
	E0921 21:34:00.057085    6700 network_create.go:104] error while trying to create docker network functional-20220921213353-5916 192.168.49.0/24: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 50e160fc84917f960027e2004ae70429c14bc6edfa07bef3f7a387f76068e409 (br-50e160fc8491): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 50e160fc84917f960027e2004ae70429c14bc6edfa07bef3f7a387f76068e409 (br-50e160fc8491): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	! Local proxy ignored: not passing HTTP_PROXY=localhost:57935 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:57935 to docker env.
	E0921 21:34:32.224381    6700 network_create.go:104] error while trying to create docker network functional-20220921213353-5916 192.168.58.0/24: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2d96550280fff0790245953bdfb5f2503997f5bd32cb8273b8c98519f092e6c7 (br-2d96550280ff): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2d96550280fff0790245953bdfb5f2503997f5bd32cb8273b8c98519f092e6c7 (br-2d96550280ff): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p functional-20220921213353-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
functional_test.go:2162: failed minikube start. args "out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker": exit status 60
functional_test.go:2167: start stdout=* [functional-20220921213353-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
- KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
- MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
- MINIKUBE_LOCATION=14995
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node functional-20220921213353-5916 in cluster functional-20220921213353-5916
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=4000MB) ...
* docker "functional-20220921213353-5916" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=4000MB) ...

, want: *Found network options:*
functional_test.go:2172: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:57935 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:57935 to docker env.
> gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 900ms! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
! Local proxy ignored: not passing HTTP_PROXY=localhost:57935 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:57935 to docker env.
E0921 21:34:00.057085    6700 network_create.go:104] error while trying to create docker network functional-20220921213353-5916 192.168.49.0/24: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 50e160fc84917f960027e2004ae70429c14bc6edfa07bef3f7a387f76068e409 (br-50e160fc8491): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 50e160fc84917f960027e2004ae70429c14bc6edfa07bef3f7a387f76068e409 (br-50e160fc8491): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4

! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system

! Local proxy ignored: not passing HTTP_PROXY=localhost:57935 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:57935 to docker env.
E0921 21:34:32.224381    6700 network_create.go:104] error while trying to create docker network functional-20220921213353-5916 192.168.58.0/24: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 2d96550280fff0790245953bdfb5f2503997f5bd32cb8273b8c98519f092e6c7 (br-2d96550280ff): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 2d96550280fff0790245953bdfb5f2503997f5bd32cb8273b8c98519f092e6c7 (br-2d96550280ff): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4

* Failed to start docker container. Running "minikube delete -p functional-20220921213353-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system

X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system

* Suggestion: Restart Docker
* Related issue: https://github.com/kubernetes/minikube/issues/6825
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (234.8706ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (560.5917ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 21:34:43.126289    1760 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/StartWithProxy (50.05s)
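The PR_DOCKER_READONLY_VOL exit seen in this run can be reproduced outside the test suite by issuing the same operation minikube attempts; when it fails with a read-only file system error, restarting Docker Desktop (the report's own suggestion) is the usual remedy. A minimal check, assuming a throwaway volume name readonly-probe:

    # repeat the operation that fails in the log: create a named volume
    docker volume create readonly-probe
    # if creation succeeded, remove the probe volume again
    docker volume rm readonly-probe
    # confirm where the daemon stores volumes (/var/lib/docker in this report)
    docker info --format '{{.DockerRootDir}}'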

TestFunctional/serial/SoftStart (76.39s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --alsologtostderr -v=8
functional_test.go:651: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --alsologtostderr -v=8: exit status 60 (1m15.3497772s)

-- stdout --
	* [functional-20220921213353-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20220921213353-5916 in cluster functional-20220921213353-5916
	* Pulling base image ...
	* docker "functional-20220921213353-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20220921213353-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 21:34:43.378417    4624 out.go:296] Setting OutFile to fd 576 ...
	I0921 21:34:43.436501    4624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:34:43.436501    4624 out.go:309] Setting ErrFile to fd 932...
	I0921 21:34:43.436501    4624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:34:43.454181    4624 out.go:303] Setting JSON to false
	I0921 21:34:43.455932    4624 start.go:115] hostinfo: {"hostname":"minikube2","uptime":2152,"bootTime":1663793931,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 21:34:43.455932    4624 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 21:34:43.461227    4624 out.go:177] * [functional-20220921213353-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 21:34:43.465119    4624 notify.go:214] Checking for updates...
	I0921 21:34:43.467330    4624 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 21:34:43.469598    4624 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 21:34:43.472093    4624 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 21:34:43.477894    4624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:34:43.481280    4624 config.go:180] Loaded profile config "functional-20220921213353-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:34:43.481487    4624 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:34:43.766889    4624 docker.go:137] docker version: linux-20.10.17
	I0921 21:34:43.774327    4624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:34:44.307473    4624 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:47 SystemTime:2022-09-21 21:34:43.9304217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:34:44.312674    4624 out.go:177] * Using the docker driver based on existing profile
	I0921 21:34:44.315031    4624 start.go:284] selected driver: docker
	I0921 21:34:44.315031    4624 start.go:808] validating driver "docker" against &{Name:functional-20220921213353-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:functional-20220921213353-5916 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:34:44.316309    4624 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 21:34:44.331742    4624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:34:44.853590    4624 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:47 SystemTime:2022-09-21 21:34:44.4882481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:34:44.901593    4624 cni.go:95] Creating CNI manager for ""
	I0921 21:34:44.901593    4624 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 21:34:44.901719    4624 start_flags.go:316] config:
	{Name:functional-20220921213353-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:functional-20220921213353-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPa
th:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:34:44.906058    4624 out.go:177] * Starting control plane node functional-20220921213353-5916 in cluster functional-20220921213353-5916
	I0921 21:34:44.908220    4624 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 21:34:44.909863    4624 out.go:177] * Pulling base image ...
	I0921 21:34:44.913535    4624 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 21:34:44.913535    4624 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:34:44.913535    4624 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 21:34:44.914533    4624 cache.go:57] Caching tarball of preloaded images
	I0921 21:34:44.914533    4624 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 21:34:44.914533    4624 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 21:34:44.914533    4624 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220921213353-5916\config.json ...
	I0921 21:34:45.119170    4624 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 21:34:45.119170    4624 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:34:45.119170    4624 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:34:45.119170    4624 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 21:34:45.119170    4624 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 21:34:45.119170    4624 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 21:34:45.119170    4624 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 21:34:45.119170    4624 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 21:34:45.120176    4624 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:34:47.316517    4624 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-3391937603: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-3391937603: read-only file system"}
	I0921 21:34:47.316567    4624 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 21:34:47.316567    4624 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 21:34:47.316642    4624 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 21:34:47.316983    4624 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:34:47.514201    4624 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 21:34:47.514201    4624 image.go:258] Getting image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 21:34:47.773346    4624 image.go:272] Writing image gcr.io/k8s-minikube/kicbase:v0.0.34
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 900msI0921 21:34:48.674071    4624 image.go:306] Pulling image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 21:34:49.010274    4624 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 21:34:49.010405    4624 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 21:34:49.010405    4624 cache.go:208] Successfully downloaded all kic artifacts
	I0921 21:34:49.010609    4624 start.go:364] acquiring machines lock for functional-20220921213353-5916: {Name:mk3f5ae8740d25300eb345feb1053ed449398cb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:34:49.010609    4624 start.go:368] acquired machines lock for "functional-20220921213353-5916" in 0s
	I0921 21:34:49.010609    4624 start.go:96] Skipping create...Using existing machine configuration
	I0921 21:34:49.010609    4624 fix.go:55] fixHost starting: 
	I0921 21:34:49.030946    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:34:49.228175    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:34:49.228175    4624 fix.go:103] recreateIfNeeded on functional-20220921213353-5916: state= err=unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:49.228175    4624 fix.go:108] machineExists: false. err=machine does not exist
	I0921 21:34:49.232609    4624 out.go:177] * docker "functional-20220921213353-5916" container is missing, will recreate.
	I0921 21:34:49.235247    4624 delete.go:124] DEMOLISHING functional-20220921213353-5916 ...
	I0921 21:34:49.251139    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:34:49.444987    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:34:49.444987    4624 stop.go:75] unable to get state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:49.444987    4624 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:49.463050    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:34:49.660188    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:34:49.660243    4624 delete.go:82] Unable to get host status for functional-20220921213353-5916, assuming it has already been deleted: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:49.669304    4624 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
	W0921 21:34:49.880897    4624 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
	I0921 21:34:49.881010    4624 kic.go:356] could not find the container functional-20220921213353-5916 to remove it. will try anyways
	I0921 21:34:49.888491    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:34:50.083613    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:34:50.083613    4624 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:50.093349    4624 cli_runner.go:164] Run: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0"
	W0921 21:34:50.283539    4624 cli_runner.go:211] docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 21:34:50.283949    4624 oci.go:646] error shutdown functional-20220921213353-5916: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:51.301300    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:34:51.512263    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:34:51.512472    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:51.512472    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:34:51.512472    4624 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:52.078300    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:34:52.258375    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:34:52.258375    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:52.258375    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:34:52.258375    4624 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:53.350360    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:34:53.529366    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:34:53.529616    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:53.529616    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:34:53.529616    4624 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:54.851748    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:34:55.064279    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:34:55.064501    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:55.064529    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:34:55.064574    4624 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:56.666909    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:34:56.882433    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:34:56.882671    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:56.882796    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:34:56.882796    4624 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:59.243301    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:34:59.442449    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:34:59.442748    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:34:59.442748    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:34:59.442811    4624 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:03.962266    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:35:04.155042    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:35:04.155144    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:04.155144    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:35:04.155200    4624 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:07.385304    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:35:07.581764    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:35:07.582198    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:07.582198    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:35:07.582283    4624 oci.go:88] couldn't shut down functional-20220921213353-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	 
	I0921 21:35:07.591910    4624 cli_runner.go:164] Run: docker rm -f -v functional-20220921213353-5916
	I0921 21:35:07.812524    4624 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
	W0921 21:35:07.990290    4624 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:07.997886    4624 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:35:08.177746    4624 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:35:08.187815    4624 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
	I0921 21:35:08.187815    4624 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
	W0921 21:35:08.395418    4624 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:08.395603    4624 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220921213353-5916
	I0921 21:35:08.395603    4624 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220921213353-5916
	
	** /stderr **
	W0921 21:35:08.396985    4624 delete.go:139] delete failed (probably ok) <nil>
	I0921 21:35:08.396985    4624 fix.go:115] Sleeping 1 second for extra luck!
	I0921 21:35:09.409198    4624 start.go:125] createHost starting for "" (driver="docker")
	I0921 21:35:09.413444    4624 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0921 21:35:09.413496    4624 start.go:159] libmachine.API.Create for "functional-20220921213353-5916" (driver="docker")
	I0921 21:35:09.413496    4624 client.go:168] LocalClient.Create starting
	I0921 21:35:09.414193    4624 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 21:35:09.414193    4624 main.go:134] libmachine: Decoding PEM data...
	I0921 21:35:09.414193    4624 main.go:134] libmachine: Parsing certificate...
	I0921 21:35:09.414889    4624 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 21:35:09.415070    4624 main.go:134] libmachine: Decoding PEM data...
	I0921 21:35:09.415168    4624 main.go:134] libmachine: Parsing certificate...
	I0921 21:35:09.422812    4624 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:35:09.610886    4624 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:35:09.620923    4624 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
	I0921 21:35:09.620923    4624 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
	W0921 21:35:09.813090    4624 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:09.813320    4624 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220921213353-5916
	I0921 21:35:09.813408    4624 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220921213353-5916
	
	** /stderr **
	I0921 21:35:09.820743    4624 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 21:35:10.036309    4624 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000730098] misses:0}
	I0921 21:35:10.036309    4624 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:35:10.036309    4624 network_create.go:115] attempt to create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 21:35:10.042280    4624 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916
	W0921 21:35:10.232131    4624 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916 returned with exit code 1
	E0921 21:35:10.232243    4624 network_create.go:104] error while trying to create docker network functional-20220921213353-5916 192.168.49.0/24: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 90ffbaec71b19379877f034e07db12e63091da476c470e338659bebd382d3be2 (br-90ffbaec71b1): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 21:35:10.232433    4624 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 90ffbaec71b19379877f034e07db12e63091da476c470e338659bebd382d3be2 (br-90ffbaec71b1): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 90ffbaec71b19379877f034e07db12e63091da476c470e338659bebd382d3be2 (br-90ffbaec71b1): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
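
The "overlapping IPv4" failure means an existing bridge network on this host (here br-a04d36bfb3cf) already covers 192.168.49.0/24, most likely left behind by an earlier profile. A hedged way to see which bridge network holds that subnet, and to clear out stale minikube-labelled networks, is:

    # Print each network's id, name and subnet(s) to find the 192.168.49.0/24 owner.
    docker network ls --format '{{.ID}} {{.Name}}' | while read -r id name; do
        printf '%s %s %s\n' "$id" "$name" \
            "$(docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}' "$id")"
    done
    # Remove unused networks carrying the minikube label seen in the create command above.
    docker network prune --force --filter label=created_by.minikube.sigs.k8s.io=true
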
	
	I0921 21:35:10.246156    4624 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 21:35:10.444354    4624 cli_runner.go:164] Run: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 21:35:10.652539    4624 cli_runner.go:211] docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 21:35:10.652539    4624 client.go:171] LocalClient.Create took 1.2390364s
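
LocalClient.Create "completed" only in the sense that it returned; the volume create just above exited non-zero, so no node container exists. Every following step wants an SSH session into the node, and the host port for the container's 22/tcp is resolved with the long inspect template below, which is why each attempt ends in "No such container". Against a container that actually exists, the same lookup could be done more simply:

    # Host address:port published for the node's SSH port (name as used in this run).
    docker port functional-20220921213353-5916 22/tcp
    # The inspect template in the log extracts only the HostPort field of that mapping.
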
	I0921 21:35:12.671222    4624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:35:12.678127    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:12.880294    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:12.880742    4624 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:13.047940    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:13.227195    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:13.227195    4624 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:13.539727    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:13.720489    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:13.720646    4624 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:14.314652    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:14.525380    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:35:14.525380    4624 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:35:14.525380    4624 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:14.535410    4624 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:35:14.550255    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:14.760081    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:14.760081    4624 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:14.957997    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:15.152143    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:15.152528    4624 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:15.505786    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:15.700011    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:15.700347    4624 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:16.169384    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:16.348273    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:35:16.348273    4624 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:35:16.348273    4624 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:16.348273    4624 start.go:128] duration metric: createHost completed in 6.9390428s
	I0921 21:35:16.359126    4624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:35:16.364742    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:16.550281    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:16.550281    4624 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:16.767239    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:16.946911    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:16.947314    4624 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:17.269336    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:17.448412    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:17.448516    4624 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:18.133397    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:18.325644    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:35:18.326029    4624 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:35:18.326109    4624 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:18.336519    4624 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:35:18.340397    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:18.537219    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:18.537392    4624 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:18.736534    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:18.930382    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:18.930382    4624 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:19.264803    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:19.442812    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:19.442812    4624 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:20.067182    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:20.276310    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:35:20.276648    4624 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:35:20.276648    4624 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:20.276788    4624 fix.go:57] fixHost completed within 31.2660266s
	I0921 21:35:20.276788    4624 start.go:83] releasing machines lock for "functional-20220921213353-5916", held for 31.2660266s
	W0921 21:35:20.276976    4624 start.go:602] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	W0921 21:35:20.277305    4624 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	I0921 21:35:20.277305    4624 start.go:617] Will try again in 5 seconds ...
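
This is the real failure behind the recreate loop: the daemon's data root is mounted read-only, so "docker volume create" (and any container create that depends on it) cannot write under /var/lib/docker/volumes. Retrying in 5 seconds cannot succeed until the daemon or its VM is restarted with a writable disk. A quick way to confirm the condition from the host, using a throwaway volume name, is:

    # Where does the daemon keep its data?
    docker info --format '{{.DockerRootDir}}'      # typically /var/lib/docker
    # Probe whether that location is writable at all (ro-probe is a throwaway name).
    docker volume create ro-probe && docker volume rm ro-probe
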
	I0921 21:35:25.288931    4624 start.go:364] acquiring machines lock for functional-20220921213353-5916: {Name:mk3f5ae8740d25300eb345feb1053ed449398cb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:35:25.288931    4624 start.go:368] acquired machines lock for "functional-20220921213353-5916" in 0s
	I0921 21:35:25.288931    4624 start.go:96] Skipping create...Using existing machine configuration
	I0921 21:35:25.288931    4624 fix.go:55] fixHost starting: 
	I0921 21:35:25.303665    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:35:25.489922    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:35:25.490161    4624 fix.go:103] recreateIfNeeded on functional-20220921213353-5916: state= err=unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:25.490266    4624 fix.go:108] machineExists: false. err=machine does not exist
	I0921 21:35:25.494388    4624 out.go:177] * docker "functional-20220921213353-5916" container is missing, will recreate.
	I0921 21:35:25.496787    4624 delete.go:124] DEMOLISHING functional-20220921213353-5916 ...
	I0921 21:35:25.509354    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:35:25.705705    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:35:25.706072    4624 stop.go:75] unable to get state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:25.706150    4624 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:25.720535    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:35:25.922103    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:35:25.922186    4624 delete.go:82] Unable to get host status for functional-20220921213353-5916, assuming it has already been deleted: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:25.929934    4624 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
	W0921 21:35:26.107589    4624 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:26.107589    4624 kic.go:356] could not find the container functional-20220921213353-5916 to remove it. will try anyways
	I0921 21:35:26.114427    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:35:26.299184    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:35:26.299184    4624 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:26.309289    4624 cli_runner.go:164] Run: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0"
	W0921 21:35:26.485471    4624 cli_runner.go:211] docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 21:35:26.485504    4624 oci.go:646] error shutdown functional-20220921213353-5916: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:27.497244    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:35:27.690693    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:35:27.690851    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:27.690929    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:35:27.691050    4624 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:28.103974    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:35:28.297550    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:35:28.297550    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:28.297550    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:35:28.297550    4624 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:28.916455    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:35:29.129153    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:35:29.129153    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:29.129153    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:35:29.129153    4624 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:30.550314    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:35:30.725416    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:35:30.725689    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:30.725689    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:35:30.725719    4624 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:31.940207    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:35:32.138395    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:35:32.138474    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:32.138575    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:35:32.138636    4624 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:35.613582    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:35:35.807505    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:35:35.807505    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:35.807505    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:35:35.807505    4624 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:40.375515    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:35:40.570531    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:35:40.570531    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:40.570531    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:35:40.570531    4624 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:46.411324    4624 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:35:46.639085    4624 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:35:46.639154    4624 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:46.639241    4624 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:35:46.639331    4624 oci.go:88] couldn't shut down functional-20220921213353-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	 
	I0921 21:35:46.646232    4624 cli_runner.go:164] Run: docker rm -f -v functional-20220921213353-5916
	I0921 21:35:46.843703    4624 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
	W0921 21:35:47.023315    4624 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:47.031371    4624 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:35:47.227120    4624 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:35:47.234126    4624 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
	I0921 21:35:47.234126    4624 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
	W0921 21:35:47.414898    4624 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:47.414898    4624 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220921213353-5916
	I0921 21:35:47.414898    4624 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220921213353-5916
	
	** /stderr **
	W0921 21:35:47.416463    4624 delete.go:139] delete failed (probably ok) <nil>
	I0921 21:35:47.416463    4624 fix.go:115] Sleeping 1 second for extra luck!
	I0921 21:35:48.429325    4624 start.go:125] createHost starting for "" (driver="docker")
	I0921 21:35:48.434458    4624 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0921 21:35:48.435186    4624 start.go:159] libmachine.API.Create for "functional-20220921213353-5916" (driver="docker")
	I0921 21:35:48.435186    4624 client.go:168] LocalClient.Create starting
	I0921 21:35:48.435755    4624 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 21:35:48.435976    4624 main.go:134] libmachine: Decoding PEM data...
	I0921 21:35:48.436043    4624 main.go:134] libmachine: Parsing certificate...
	I0921 21:35:48.436200    4624 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 21:35:48.436200    4624 main.go:134] libmachine: Decoding PEM data...
	I0921 21:35:48.436200    4624 main.go:134] libmachine: Parsing certificate...
	I0921 21:35:48.444678    4624 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:35:48.634291    4624 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:35:48.644156    4624 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
	I0921 21:35:48.644156    4624 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
	W0921 21:35:48.832218    4624 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:48.832264    4624 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220921213353-5916
	I0921 21:35:48.832328    4624 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220921213353-5916
	
	** /stderr **
	I0921 21:35:48.839971    4624 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 21:35:49.036899    4624 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000730098] amended:false}} dirty:map[] misses:0}
	I0921 21:35:49.036899    4624 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:35:49.053059    4624 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000730098] amended:true}} dirty:map[192.168.49.0:0xc000730098 192.168.58.0:0xc000526740] misses:0}
	I0921 21:35:49.054142    4624 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:35:49.054142    4624 network_create.go:115] attempt to create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 21:35:49.062603    4624 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916
	W0921 21:35:49.237016    4624 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916 returned with exit code 1
	E0921 21:35:49.237195    4624 network_create.go:104] error while trying to create docker network functional-20220921213353-5916 192.168.58.0/24: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 09c36bf009a37dc8005f0a8a8ad410d32f976d3f6ea3778cdb5a88c224bdbc9a (br-09c36bf009a3): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 21:35:49.237581    4624 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 09c36bf009a37dc8005f0a8a8ad410d32f976d3f6ea3778cdb5a88c224bdbc9a (br-09c36bf009a3): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 09c36bf009a37dc8005f0a8a8ad410d32f976d3f6ea3778cdb5a88c224bdbc9a (br-09c36bf009a3): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 21:35:49.251443    4624 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 21:35:49.449649    4624 cli_runner.go:164] Run: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 21:35:49.656339    4624 cli_runner.go:211] docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 21:35:49.656454    4624 client.go:171] LocalClient.Create took 1.2212237s
	I0921 21:35:51.677779    4624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:35:51.684486    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:51.884349    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:51.884524    4624 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:52.072152    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:52.282554    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:52.282836    4624 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:52.721193    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:52.929149    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:35:52.929329    4624 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:35:52.929424    4624 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:52.940506    4624 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:35:52.947204    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:53.130278    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:53.130278    4624 retry.go:31] will retry after 144.863405ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:53.293929    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:53.489706    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:53.489965    4624 retry.go:31] will retry after 410.553224ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:53.914458    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:54.098345    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:54.098345    4624 retry.go:31] will retry after 314.505366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:54.434059    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:54.612978    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:35:54.612978    4624 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:35:54.612978    4624 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:54.612978    4624 start.go:128] duration metric: createHost completed in 6.1834793s
	I0921 21:35:54.627287    4624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:35:54.636610    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:54.816131    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:54.816131    4624 retry.go:31] will retry after 200.38067ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:55.029698    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:55.222670    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:55.223116    4624 retry.go:31] will retry after 252.474839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:55.489316    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:55.698496    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:55.698496    4624 retry.go:31] will retry after 585.618668ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:56.308530    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:56.498784    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:35:56.499347    4624 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:35:56.499411    4624 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:56.510982    4624 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:35:56.517474    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:56.716867    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:56.716867    4624 retry.go:31] will retry after 194.626905ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:56.925507    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:57.105121    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:57.105121    4624 retry.go:31] will retry after 346.182076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:57.470111    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:57.648645    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:35:57.648645    4624 retry.go:31] will retry after 579.704465ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:58.240101    4624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:35:58.432829    4624 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:35:58.433165    4624 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:35:58.433165    4624 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:35:58.433165    4624 fix.go:57] fixHost completed within 33.1440669s
	I0921 21:35:58.433165    4624 start.go:83] releasing machines lock for "functional-20220921213353-5916", held for 33.1440669s
	W0921 21:35:58.433165    4624 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220921213353-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p functional-20220921213353-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	I0921 21:35:58.466899    4624 out.go:177] 
	W0921 21:35:58.469850    4624 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	W0921 21:35:58.470062    4624 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 21:35:58.470156    4624 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 21:35:58.473788    4624 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:653: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --alsologtostderr -v=8": exit status 60
functional_test.go:655: soft start took 1m15.5675599s for "functional-20220921213353-5916" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (264.3947ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (536.8396ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:35:59.512329    7452 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/SoftStart (76.39s)
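
Note: the SoftStart failure above has two separate daemon-side causes visible in the log. First, every candidate subnet minikube tries (192.168.49.0/24, then 192.168.58.0/24) collides with an existing bridge network on the host, so no dedicated network can be created. Second, the fallback path then fails because /var/lib/docker/volumes inside the Docker Desktop VM is mounted read-only, which is what triggers PR_DOCKER_READONLY_VOL. A minimal diagnostic sketch, assuming a working docker CLI on the Jenkins host; the profile name and network ID are taken from the log, the "softstart-smoke-test" volume name is purely illustrative:

    # list bridge networks and their subnets to find the ones overlapping 192.168.49.0/24 and 192.168.58.0/24
    docker network ls --filter driver=bridge --format "{{.ID}}  {{.Name}}"
    docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' 8a3cd8d165a4

    # reproduce the volume failure directly; failing here confirms the read-only filesystem rather than a minikube bug
    docker volume create softstart-smoke-test
    docker volume rm softstart-smoke-test

    # recovery path already suggested by minikube itself: delete the profile, restart Docker, retry
    minikube delete -p functional-20220921213353-5916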

                                                
                                    
TestFunctional/serial/KubeContext (1.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
functional_test.go:673: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (262.5314ms)

                                                
                                                
** stderr ** 
	W0921 21:35:59.729813    6528 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:675: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:679: expected current-context = "functional-20220921213353-5916", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubeContext]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (267.7319ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (548.4525ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:36:00.606013    8728 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/KubeContext (1.09s)
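
Note: this failure, and the KubectlGetPods and MinikubeKubectlCmd* failures that follow, are downstream of SoftStart: the cluster never came up, so no context was ever written to C:\Users\jenkins.minikube2\minikube-integration\kubeconfig. A quick check, assuming kubectl is on PATH (the kubeconfig path is copied verbatim from the log):

    # expect an empty context list; the "Config not found" warning above is the same symptom
    kubectl config get-contexts --kubeconfig C:\Users\jenkins.minikube2\minikube-integration\kubeconfig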

                                                
                                    
TestFunctional/serial/KubectlGetPods (1.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220921213353-5916 get po -A
functional_test.go:688: (dbg) Non-zero exit: kubectl --context functional-20220921213353-5916 get po -A: exit status 1 (220.3611ms)

                                                
                                                
** stderr ** 
	W0921 21:36:00.798178    3372 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220921213353-5916
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:690: failed to get kubectl pods: args "kubectl --context functional-20220921213353-5916 get po -A" : exit status 1
functional_test.go:694: expected stderr to be empty but got *"W0921 21:36:00.798178    3372 loader.go:223] Config not found: C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig\nError in configuration: \n* context was not found for specified context: functional-20220921213353-5916\n* cluster has no server defined\n"*: args "kubectl --context functional-20220921213353-5916 get po -A"
functional_test.go:697: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-20220921213353-5916 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (239.3836ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (564.4568ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:36:01.657362    8464 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/KubectlGetPods (1.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh sudo crictl images
functional_test.go:1116: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh sudo crictl images: exit status 80 (1.0285798s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f40552ee918ac053c4c404bc1ee7532c196ce64c_3.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1118: failed to get images by "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh sudo crictl images" ssh exit status 80
functional_test.go:1122: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f40552ee918ac053c4c404bc1ee7532c196ce64c_3.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.03s)
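
Note: the GUEST_STATUS / "No such container" errors in this test and in cache_reload below only reflect that the node container was never created, so every "minikube ssh"-based assertion has to fail. A one-line sanity check, assuming the docker CLI on the Jenkins host (the --format string is illustrative):

    # expect no output: the profile container does not exist
    docker ps -a --filter name=functional-20220921213353-5916 --format "{{.Names}}: {{.Status}}"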

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (3.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1139: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh sudo docker rmi k8s.gcr.io/pause:latest: exit status 80 (1.0219593s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_695159ccd5e0da3f5d811f2823eb9163b9dc65a6_3.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1142: failed to manually delete image "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh sudo docker rmi k8s.gcr.io/pause:latest" : exit status 80
functional_test.go:1145: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 80 (1.0834106s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_90c12c9ea894b73e3971aa1ec67d0a7aeefe0b8f_7.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 cache reload
functional_test.go:1155: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1155: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 80 (992.7198ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_90c12c9ea894b73e3971aa1ec67d0a7aeefe0b8f_7.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1157: expected "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh sudo crictl inspecti k8s.gcr.io/pause:latest" to run successfully but got error: exit status 80
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (3.77s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 kubectl -- --context functional-20220921213353-5916 get pods
functional_test.go:708: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 kubectl -- --context functional-20220921213353-5916 get pods: exit status 1 (600.435ms)

                                                
                                                
** stderr ** 
	W0921 21:36:13.686496    8080 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220921213353-5916
	* no server found for cluster "functional-20220921213353-5916"

                                                
                                                
** /stderr **
functional_test.go:711: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 kubectl -- --context functional-20220921213353-5916 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (239.3296ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (569.3263ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:36:14.588272    8112 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (1.42s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out\kubectl.exe --context functional-20220921213353-5916 get pods
functional_test.go:733: (dbg) Non-zero exit: out\kubectl.exe --context functional-20220921213353-5916 get pods: exit status 1 (572.7579ms)

                                                
                                                
** stderr ** 
	W0921 21:36:15.079167    1312 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220921213353-5916
	* no server found for cluster "functional-20220921213353-5916"

                                                
                                                
** /stderr **
functional_test.go:736: failed to run kubectl directly. args "out\\kubectl.exe --context functional-20220921213353-5916 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (238.7481ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (579.0942ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:36:15.992452    7640 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.40s)

                                                
                                    
TestFunctional/serial/ExtraConfig (76.17s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 60 (1m15.3464876s)

                                                
                                                
-- stdout --
	* [functional-20220921213353-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20220921213353-5916 in cluster functional-20220921213353-5916
	* Pulling base image ...
	* docker "functional-20220921213353-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20220921213353-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase: [image pull progress output omitted]
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 21:36:43.107952    6080 network_create.go:104] error while trying to create docker network functional-20220921213353-5916 192.168.49.0/24: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f714dcf8a6e38001b3c606bcfef85df748989420f42256b76baa4bc8f6fcda81 (br-f714dcf8a6e3): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f714dcf8a6e38001b3c606bcfef85df748989420f42256b76baa4bc8f6fcda81 (br-f714dcf8a6e3): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	E0921 21:37:22.121367    6080 network_create.go:104] error while trying to create docker network functional-20220921213353-5916 192.168.58.0/24: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 086f6ca4b948148b5c753ba2bc2bae51b8309d08fd412973a0dd7526f4b38637 (br-086f6ca4b948): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 086f6ca4b948148b5c753ba2bc2bae51b8309d08fd412973a0dd7526f4b38637 (br-086f6ca4b948): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p functional-20220921213353-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
functional_test.go:751: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 60
functional_test.go:753: restart took 1m15.3464876s for "functional-20220921213353-5916" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (236.6446ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (579.9406ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:37:32.167794    2344 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/ExtraConfig (76.17s)
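Both root causes of this restart failure are visible in the stderr above: each candidate subnet minikube tries (192.168.49.0/24, then 192.168.58.0/24) overlaps an existing Docker bridge network, and the fallback "docker volume create" then fails because /var/lib/docker is read-only inside the Docker Desktop VM. The following is a minimal diagnostic sketch, not code from the test suite, assuming only that the docker CLI is on PATH: it lists every Docker network together with its IPv4 subnets so the bridge that already claims the overlapping range can be identified (and, if stale, removed with "docker network rm").

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// "docker network ls -q" prints one network ID per line.
		out, err := exec.Command("docker", "network", "ls", "-q").Output()
		if err != nil {
			log.Fatalf("docker network ls: %v", err)
		}
		for _, id := range strings.Fields(string(out)) {
			// Print the network name followed by its IPAM subnets (empty for
			// networks such as "host" and "none" that have no IPv4 config).
			format := "{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}"
			info, err := exec.Command("docker", "network", "inspect", "-f", format, id).Output()
			if err != nil {
				log.Printf("inspect %s: %v", id, err)
				continue
			}
			fmt.Print(string(info))
		}
	}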

                                                
                                    
TestFunctional/serial/ComponentHealth (1.05s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220921213353-5916 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:802: (dbg) Non-zero exit: kubectl --context functional-20220921213353-5916 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (232.1308ms)

                                                
                                                
** stderr ** 
	W0921 21:37:32.343199    7528 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220921213353-5916" does not exist

                                                
                                                
** /stderr **
functional_test.go:804: failed to get components. args "kubectl --context functional-20220921213353-5916 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (236.9262ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (561.9191ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:37:33.213744    9032 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/ComponentHealth (1.05s)
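This failure is consequential rather than independent: the cluster from the previous test never came up, so the kubeconfig context is missing and the post-mortem probes report the host as "Nonexistent". A small illustrative sketch of that probe, under the assumption that the docker CLI is available (this is not the helpers_test.go implementation): "docker container inspect --format={{.State.Status}}" either returns a state such as "running", or exits non-zero with "No such container", which the report maps to "Nonexistent".

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// containerState mirrors the post-mortem probe: it returns the container's
	// Docker state, or "Nonexistent" when the container no longer exists.
	func containerState(name string) string {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			// A deleted container makes docker exit non-zero with
			// "Error: No such container: <name>" on stderr.
			if strings.Contains(string(out), "No such container") {
				return "Nonexistent"
			}
			return fmt.Sprintf("unknown state: %v", err)
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		if len(os.Args) < 2 {
			fmt.Println("usage: status-probe <container-name>")
			os.Exit(1)
		}
		fmt.Println(containerState(os.Args[1]))
	}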

                                                
                                    
TestFunctional/serial/LogsCmd (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 logs
functional_test.go:1228: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 logs: exit status 80 (1.1389509s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------------------|-------------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                        Args                                        |               Profile               |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------|-------------------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only -p                                                         | download-only-20220921212952-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:29 GMT |                     |
	|         | download-only-20220921212952-5916                                                  |                                     |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                          |                                     |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                       |                                     |                   |         |                     |                     |
	|         | --container-runtime=docker                                                         |                                     |                   |         |                     |                     |
	|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
	| start   | -o=json --download-only -p                                                         | download-only-20220921212952-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT |                     |
	|         | download-only-20220921212952-5916                                                  |                                     |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                          |                                     |                   |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                                                       |                                     |                   |         |                     |                     |
	|         | --container-runtime=docker                                                         |                                     |                   |         |                     |                     |
	|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
	| delete  | --all                                                                              | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
	| delete  | -p                                                                                 | download-only-20220921212952-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
	|         | download-only-20220921212952-5916                                                  |                                     |                   |         |                     |                     |
	| delete  | -p                                                                                 | download-only-20220921212952-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
	|         | download-only-20220921212952-5916                                                  |                                     |                   |         |                     |                     |
	| start   | --download-only -p                                                                 | download-docker-20220921213020-5916 | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT |                     |
	|         | download-docker-20220921213020-5916                                                |                                     |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                          |                                     |                   |         |                     |                     |
	|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
	| delete  | -p                                                                                 | download-docker-20220921213020-5916 | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
	|         | download-docker-20220921213020-5916                                                |                                     |                   |         |                     |                     |
	| start   | --download-only -p                                                                 | binary-mirror-20220921213055-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT |                     |
	|         | binary-mirror-20220921213055-5916                                                  |                                     |                   |         |                     |                     |
	|         | --alsologtostderr --binary-mirror                                                  |                                     |                   |         |                     |                     |
	|         | http://127.0.0.1:57904                                                             |                                     |                   |         |                     |                     |
	|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
	| delete  | -p                                                                                 | binary-mirror-20220921213055-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
	|         | binary-mirror-20220921213055-5916                                                  |                                     |                   |         |                     |                     |
	| start   | -p addons-20220921213059-5916                                                      | addons-20220921213059-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:31 GMT |                     |
	|         | --wait=true --memory=4000                                                          |                                     |                   |         |                     |                     |
	|         | --alsologtostderr                                                                  |                                     |                   |         |                     |                     |
	|         | --addons=registry                                                                  |                                     |                   |         |                     |                     |
	|         | --addons=metrics-server                                                            |                                     |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                           |                                     |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                       |                                     |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                  |                                     |                   |         |                     |                     |
	|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
	|         | --addons=ingress                                                                   |                                     |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                               |                                     |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                               |                                     |                   |         |                     |                     |
	| delete  | -p addons-20220921213059-5916                                                      | addons-20220921213059-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:31 GMT | 21 Sep 22 21:31 GMT |
	| start   | -p nospam-20220921213151-5916 -n=1 --memory=2250 --wait=false                      | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:31 GMT |                     |
	|         | --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 |                                     |                   |         |                     |                     |
	|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
	| start   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
	|         | start --dry-run                                                                    |                                     |                   |         |                     |                     |
	| start   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
	|         | start --dry-run                                                                    |                                     |                   |         |                     |                     |
	| start   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
	|         | start --dry-run                                                                    |                                     |                   |         |                     |                     |
	| pause   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
	|         | pause                                                                              |                                     |                   |         |                     |                     |
	| pause   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
	|         | pause                                                                              |                                     |                   |         |                     |                     |
	| pause   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
	|         | pause                                                                              |                                     |                   |         |                     |                     |
	| unpause | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
	|         | unpause                                                                            |                                     |                   |         |                     |                     |
	| unpause | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
	|         | unpause                                                                            |                                     |                   |         |                     |                     |
	| unpause | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
	|         | unpause                                                                            |                                     |                   |         |                     |                     |
	| stop    | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
	|         | stop                                                                               |                                     |                   |         |                     |                     |
	| stop    | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:33 GMT |                     |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
	|         | stop                                                                               |                                     |                   |         |                     |                     |
	| stop    | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:33 GMT |                     |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
	|         | stop                                                                               |                                     |                   |         |                     |                     |
	| delete  | -p nospam-20220921213151-5916                                                      | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:33 GMT | 21 Sep 22 21:33 GMT |
	| start   | -p                                                                                 | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:33 GMT |                     |
	|         | functional-20220921213353-5916                                                     |                                     |                   |         |                     |                     |
	|         | --memory=4000                                                                      |                                     |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                              |                                     |                   |         |                     |                     |
	|         | --wait=all --driver=docker                                                         |                                     |                   |         |                     |                     |
	| start   | -p                                                                                 | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:34 GMT |                     |
	|         | functional-20220921213353-5916                                                     |                                     |                   |         |                     |                     |
	|         | --alsologtostderr -v=8                                                             |                                     |                   |         |                     |                     |
	| cache   | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
	|         | cache add k8s.gcr.io/pause:3.1                                                     |                                     |                   |         |                     |                     |
	| cache   | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
	|         | cache add k8s.gcr.io/pause:3.3                                                     |                                     |                   |         |                     |                     |
	| cache   | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
	|         | cache add                                                                          |                                     |                   |         |                     |                     |
	|         | k8s.gcr.io/pause:latest                                                            |                                     |                   |         |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.3                                                        | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
	| cache   | list                                                                               | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
	| ssh     | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
	|         | ssh sudo crictl images                                                             |                                     |                   |         |                     |                     |
	| ssh     | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
	|         | ssh sudo docker rmi                                                                |                                     |                   |         |                     |                     |
	|         | k8s.gcr.io/pause:latest                                                            |                                     |                   |         |                     |                     |
	| ssh     | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
	|         | ssh sudo crictl inspecti                                                           |                                     |                   |         |                     |                     |
	|         | k8s.gcr.io/pause:latest                                                            |                                     |                   |         |                     |                     |
	| cache   | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
	|         | cache reload                                                                       |                                     |                   |         |                     |                     |
	| ssh     | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
	|         | ssh sudo crictl inspecti                                                           |                                     |                   |         |                     |                     |
	|         | k8s.gcr.io/pause:latest                                                            |                                     |                   |         |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.1                                                        | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
	| cache   | delete k8s.gcr.io/pause:latest                                                     | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
	| kubectl | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
	|         | kubectl -- --context                                                               |                                     |                   |         |                     |                     |
	|         | functional-20220921213353-5916                                                     |                                     |                   |         |                     |                     |
	|         | get pods                                                                           |                                     |                   |         |                     |                     |
	| start   | -p functional-20220921213353-5916                                                  | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision           |                                     |                   |         |                     |                     |
	|         | --wait=all                                                                         |                                     |                   |         |                     |                     |
	|---------|------------------------------------------------------------------------------------|-------------------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 21:36:16
	Running on machine: minikube2
	Binary: Built with gc go1.19.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 21:36:16.253670    6080 out.go:296] Setting OutFile to fd 992 ...
	I0921 21:36:16.306661    6080 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:36:16.306661    6080 out.go:309] Setting ErrFile to fd 668...
	I0921 21:36:16.306661    6080 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:36:16.326042    6080 out.go:303] Setting JSON to false
	I0921 21:36:16.328732    6080 start.go:115] hostinfo: {"hostname":"minikube2","uptime":2244,"bootTime":1663793932,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 21:36:16.329367    6080 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 21:36:16.333798    6080 out.go:177] * [functional-20220921213353-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 21:36:16.336178    6080 notify.go:214] Checking for updates...
	I0921 21:36:16.338864    6080 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 21:36:16.341289    6080 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 21:36:16.343686    6080 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 21:36:16.346193    6080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:36:16.351139    6080 config.go:180] Loaded profile config "functional-20220921213353-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:36:16.351139    6080 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:36:16.637054    6080 docker.go:137] docker version: linux-20.10.17
	I0921 21:36:16.644807    6080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:36:17.168944    6080 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 21:36:16.7990226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:36:17.173954    6080 out.go:177] * Using the docker driver based on existing profile
	I0921 21:36:17.176425    6080 start.go:284] selected driver: docker
	I0921 21:36:17.176425    6080 start.go:808] validating driver "docker" against &{Name:functional-20220921213353-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:functional-20220921213353-5916 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:36:17.176425    6080 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 21:36:17.189017    6080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:36:17.716472    6080 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 21:36:17.3434501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:36:17.774790    6080 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 21:36:17.774790    6080 cni.go:95] Creating CNI manager for ""
	I0921 21:36:17.774860    6080 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 21:36:17.774860    6080 start_flags.go:316] config:
	{Name:functional-20220921213353-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:functional-20220921213353-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:36:17.779515    6080 out.go:177] * Starting control plane node functional-20220921213353-5916 in cluster functional-20220921213353-5916
	I0921 21:36:17.781339    6080 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 21:36:17.783952    6080 out.go:177] * Pulling base image ...
	I0921 21:36:17.786719    6080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 21:36:17.786719    6080 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 21:36:17.786719    6080 cache.go:57] Caching tarball of preloaded images
	I0921 21:36:17.786719    6080 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:36:17.786719    6080 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 21:36:17.786719    6080 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 21:36:17.787674    6080 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220921213353-5916\config.json ...
	I0921 21:36:17.997663    6080 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 21:36:17.997782    6080 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:36:17.998107    6080 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:36:17.998186    6080 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 21:36:17.998298    6080 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 21:36:17.998298    6080 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 21:36:17.998502    6080 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 21:36:17.998502    6080 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 21:36:17.998502    6080 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:36:20.222960    6080 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 21:36:20.223038    6080 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 21:36:20.223101    6080 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 21:36:20.223101    6080 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:36:20.407337    6080 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 21:36:21.937626    6080 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 21:36:21.937626    6080 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 21:36:21.937626    6080 cache.go:208] Successfully downloaded all kic artifacts
	I0921 21:36:21.937626    6080 start.go:364] acquiring machines lock for functional-20220921213353-5916: {Name:mk3f5ae8740d25300eb345feb1053ed449398cb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:36:21.938173    6080 start.go:368] acquired machines lock for "functional-20220921213353-5916" in 546.6µs
	I0921 21:36:21.938354    6080 start.go:96] Skipping create...Using existing machine configuration
	I0921 21:36:21.938434    6080 fix.go:55] fixHost starting: 
	I0921 21:36:21.952190    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:22.139760    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:36:22.139760    6080 fix.go:103] recreateIfNeeded on functional-20220921213353-5916: state= err=unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:22.139760    6080 fix.go:108] machineExists: false. err=machine does not exist
	I0921 21:36:22.143965    6080 out.go:177] * docker "functional-20220921213353-5916" container is missing, will recreate.
	I0921 21:36:22.146227    6080 delete.go:124] DEMOLISHING functional-20220921213353-5916 ...
	I0921 21:36:22.160033    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:22.355164    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:36:22.355291    6080 stop.go:75] unable to get state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:22.355334    6080 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:22.368925    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:22.566986    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:36:22.567187    6080 delete.go:82] Unable to get host status for functional-20220921213353-5916, assuming it has already been deleted: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:22.575667    6080 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
	W0921 21:36:22.754941    6080 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:22.754992    6080 kic.go:356] could not find the container functional-20220921213353-5916 to remove it. will try anyways
	I0921 21:36:22.766306    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:22.956751    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:36:22.956751    6080 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:22.967169    6080 cli_runner.go:164] Run: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0"
	W0921 21:36:23.207048    6080 cli_runner.go:211] docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 21:36:23.207048    6080 oci.go:646] error shutdown functional-20220921213353-5916: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:24.226922    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:24.405201    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:36:24.405449    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:24.405449    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:36:24.405524    6080 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
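Note: the shutdown-verification loop that starts here keeps re-running the same inspect command and waits a little longer after each failure (552ms, then roughly 1.1s, 1.3s, 1.6s, 2.3s, ...) before giving up. A compact sketch of a retry-with-growing-delay helper in that spirit; the growth factor and attempt cap below are assumptions, not the values used by minikube's retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts are exhausted,
    // sleeping a little longer after each failure.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
    	delay := initial
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		time.Sleep(delay)
    		delay = delay * 3 / 2 // grow the wait between attempts
    	}
    	return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
    }

    func main() {
    	tries := 0
    	err := retryWithBackoff(8, 500*time.Millisecond, func() error {
    		tries++
    		// Stand-in for "verify the container is exited"; it always fails
    		// here, just as in the log, because the container no longer exists.
    		return errors.New(`unknown state "functional-20220921213353-5916"`)
    	})
    	fmt.Println(tries, "attempts, final error:", err)
    }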
	I0921 21:36:24.968714    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:25.206628    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:36:25.206698    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:25.206698    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:36:25.206698    6080 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:26.308678    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:26.501826    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:36:26.501826    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:26.501826    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:36:26.501826    6080 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:27.820650    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:28.014954    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:36:28.015083    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:28.015083    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:36:28.015083    6080 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:29.609977    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:29.803271    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:36:29.803271    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:29.803271    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:36:29.803271    6080 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:32.151484    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:32.330326    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:36:32.330326    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:32.330326    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:36:32.330326    6080 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:36.847004    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:37.044165    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:36:37.044165    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:37.044165    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:36:37.044165    6080 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:40.277013    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:40.455025    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:36:40.455025    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:40.455025    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:36:40.455025    6080 oci.go:88] couldn't shut down functional-20220921213353-5916 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	 
	I0921 21:36:40.462755    6080 cli_runner.go:164] Run: docker rm -f -v functional-20220921213353-5916
	I0921 21:36:40.679812    6080 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
	W0921 21:36:40.858916    6080 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:40.867031    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:36:41.061541    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:36:41.069860    6080 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
	I0921 21:36:41.069860    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
	W0921 21:36:41.250007    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:41.250007    6080 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220921213353-5916
	I0921 21:36:41.250007    6080 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220921213353-5916
	
	** /stderr **
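Note: the docker network inspect calls above pass a Go template that renders each network as a small JSON object (Name, Driver, Subnet, Gateway, MTU, ContainerIPs). When the network exists and the command succeeds, output of that shape can be decoded directly; a sketch of a matching struct and decode step, with field names taken from the template's keys (the struct and sample value are illustrative):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // dockerNetwork mirrors the JSON shape produced by the inspect template
    // seen in the log: {"Name": ..., "Driver": ..., "Subnet": ..., ...}.
    type dockerNetwork struct {
    	Name         string   `json:"Name"`
    	Driver       string   `json:"Driver"`
    	Subnet       string   `json:"Subnet"`
    	Gateway      string   `json:"Gateway"`
    	MTU          int      `json:"MTU"`
    	ContainerIPs []string `json:"ContainerIPs"`
    }

    func main() {
    	// Example output for a network that does exist; the cluster network in
    	// the log is missing, which is why inspect exits 1 there.
    	raw := `{"Name":"bridge","Driver":"bridge","Subnet":"172.17.0.0/16","Gateway":"172.17.0.1","MTU":1500,"ContainerIPs":[]}`
    	var nw dockerNetwork
    	if err := json.Unmarshal([]byte(raw), &nw); err != nil {
    		fmt.Println("decode failed:", err)
    		return
    	}
    	fmt.Printf("%s uses %s via %s\n", nw.Name, nw.Subnet, nw.Gateway)
    }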
	W0921 21:36:41.250911    6080 delete.go:139] delete failed (probably ok) <nil>
	I0921 21:36:41.250911    6080 fix.go:115] Sleeping 1 second for extra luck!
	I0921 21:36:42.257634    6080 start.go:125] createHost starting for "" (driver="docker")
	I0921 21:36:42.261811    6080 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0921 21:36:42.262397    6080 start.go:159] libmachine.API.Create for "functional-20220921213353-5916" (driver="docker")
	I0921 21:36:42.262460    6080 client.go:168] LocalClient.Create starting
	I0921 21:36:42.263164    6080 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 21:36:42.263330    6080 main.go:134] libmachine: Decoding PEM data...
	I0921 21:36:42.263330    6080 main.go:134] libmachine: Parsing certificate...
	I0921 21:36:42.263575    6080 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 21:36:42.263854    6080 main.go:134] libmachine: Decoding PEM data...
	I0921 21:36:42.263854    6080 main.go:134] libmachine: Parsing certificate...
	I0921 21:36:42.272090    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:36:42.459821    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:36:42.465249    6080 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
	I0921 21:36:42.466249    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
	W0921 21:36:42.661406    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:42.661406    6080 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220921213353-5916
	I0921 21:36:42.661406    6080 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220921213353-5916
	
	** /stderr **
	I0921 21:36:42.668467    6080 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 21:36:42.883642    6080 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000406830] misses:0}
	I0921 21:36:42.883642    6080 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:36:42.884322    6080 network_create.go:115] attempt to create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 21:36:42.891741    6080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916
	W0921 21:36:43.107737    6080 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916 returned with exit code 1
	E0921 21:36:43.107952    6080 network_create.go:104] error while trying to create docker network functional-20220921213353-5916 192.168.49.0/24: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f714dcf8a6e38001b3c606bcfef85df748989420f42256b76baa4bc8f6fcda81 (br-f714dcf8a6e3): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 21:36:43.107952    6080 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f714dcf8a6e38001b3c606bcfef85df748989420f42256b76baa4bc8f6fcda81 (br-f714dcf8a6e3): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
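Note: the network create fails because another bridge network on this daemon already owns 192.168.49.0/24 ("networks have overlapping IPv4"), so minikube has to continue without a dedicated network. One way to see such a conflict up front is to list the subnets the daemon already uses before picking a CIDR; a hedged sketch of that enumeration (usedSubnets is an illustrative helper, not part of minikube):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // usedSubnets asks the daemon for every network's IPAM subnets so a caller
    // can skip CIDRs that would collide (the "overlapping IPv4" error above).
    func usedSubnets() ([]string, error) {
    	out, err := exec.Command("docker", "network", "ls", "-q").Output()
    	if err != nil {
    		return nil, err
    	}
    	var subnets []string
    	for _, id := range strings.Fields(string(out)) {
    		s, err := exec.Command("docker", "network", "inspect", id,
    			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
    		if err != nil {
    			continue // the network may have vanished between ls and inspect
    		}
    		subnets = append(subnets, strings.Fields(string(s))...)
    	}
    	return subnets, nil
    }

    func main() {
    	subnets, err := usedSubnets()
    	fmt.Println("subnets already in use:", subnets, "err:", err)
    }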
	
	I0921 21:36:43.124052    6080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 21:36:43.349497    6080 cli_runner.go:164] Run: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 21:36:43.528251    6080 cli_runner.go:211] docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 21:36:43.528251    6080 client.go:171] LocalClient.Create took 1.2657847s
	I0921 21:36:45.547272    6080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:36:45.554271    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:45.756359    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:45.756684    6080 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:45.922450    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:46.130089    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:46.130621    6080 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:46.443946    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:46.650140    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:46.650140    6080 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:47.236794    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:47.416282    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:36:47.416449    6080 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:36:47.416543    6080 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:47.426387    6080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:36:47.431388    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:47.625368    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:47.625368    6080 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:47.825346    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:48.016391    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:48.016535    6080 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:48.364223    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:48.547800    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:48.547800    6080 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:49.022989    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:49.209142    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:36:49.209142    6080 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:36:49.209142    6080 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:49.209142    6080 start.go:128] duration metric: createHost completed in 6.9514718s
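Note: both disk-space probes above (df -h /var and df -BG /var) never actually run. The SSH runner first has to resolve which host port Docker mapped to the container's 22/tcp, and that inspect fails because the container was never created. A small sketch of that port lookup, using the same inspect template the log shows (sshHostPort is an illustrative name and the error handling is simplified):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort returns the host port mapped to 22/tcp for a container,
    // using the template from the log. If the container is missing, the
    // inspect call exits non-zero, which is exactly the failure seen above.
    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", container,
    		"-f", "{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}").Output()
    	if err != nil {
    		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("functional-20220921213353-5916")
    	fmt.Println("ssh host port:", port, "err:", err)
    }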
	I0921 21:36:49.221157    6080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:36:49.226882    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:49.411553    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:49.411553    6080 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:49.618547    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:49.808476    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:49.808476    6080 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:50.118067    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:50.296678    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:50.297113    6080 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:50.980153    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:51.186661    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:36:51.186882    6080 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:36:51.186882    6080 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:51.197365    6080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:36:51.203311    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:51.387794    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:51.387794    6080 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:51.584761    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:51.777183    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:51.777695    6080 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:52.122330    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:52.302625    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:52.302625    6080 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:52.924004    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:36:53.134372    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:36:53.134372    6080 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:36:53.134372    6080 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:53.134372    6080 fix.go:57] fixHost completed within 31.1958556s
	I0921 21:36:53.134372    6080 start.go:83] releasing machines lock for "functional-20220921213353-5916", held for 31.196037s
	W0921 21:36:53.134372    6080 start.go:602] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	W0921 21:36:53.134372    6080 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	I0921 21:36:53.134372    6080 start.go:617] Will try again in 5 seconds ...
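Note: this is the root cause of the whole section. docker volume create cannot make /var/lib/docker/volumes/functional-20220921213353-5916 because the daemon's storage is a read-only file system, so every recreate attempt dies before a container ever exists and the loop above repeats. A sketch that surfaces that error directly, mirroring the volume-create command from the log (createNodeVolume is an illustrative wrapper, not minikube's code):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    // createNodeVolume mirrors the volume-create command from the log and
    // returns the daemon's stderr, which is where the "read-only file system"
    // message shows up on this host.
    func createNodeVolume(name string) error {
    	var stderr bytes.Buffer
    	cmd := exec.Command("docker", "volume", "create", name,
    		"--label", "name.minikube.sigs.k8s.io="+name,
    		"--label", "created_by.minikube.sigs.k8s.io=true")
    	cmd.Stderr = &stderr
    	if err := cmd.Run(); err != nil {
    		return fmt.Errorf("docker volume create %s: %v: %s", name, err, stderr.String())
    	}
    	return nil
    }

    func main() {
    	if err := createNodeVolume("functional-20220921213353-5916"); err != nil {
    		fmt.Println(err) // on this host: "... read-only file system"
    	}
    }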
	I0921 21:36:58.145977    6080 start.go:364] acquiring machines lock for functional-20220921213353-5916: {Name:mk3f5ae8740d25300eb345feb1053ed449398cb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:36:58.146618    6080 start.go:368] acquired machines lock for "functional-20220921213353-5916" in 475.4µs
	I0921 21:36:58.146795    6080 start.go:96] Skipping create...Using existing machine configuration
	I0921 21:36:58.146795    6080 fix.go:55] fixHost starting: 
	I0921 21:36:58.161804    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:58.363602    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:36:58.363628    6080 fix.go:103] recreateIfNeeded on functional-20220921213353-5916: state= err=unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:58.363628    6080 fix.go:108] machineExists: false. err=machine does not exist
	I0921 21:36:58.368237    6080 out.go:177] * docker "functional-20220921213353-5916" container is missing, will recreate.
	I0921 21:36:58.370531    6080 delete.go:124] DEMOLISHING functional-20220921213353-5916 ...
	I0921 21:36:58.383951    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:58.567537    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:36:58.567537    6080 stop.go:75] unable to get state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:58.567537    6080 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:58.584383    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:58.770316    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:36:58.770580    6080 delete.go:82] Unable to get host status for functional-20220921213353-5916, assuming it has already been deleted: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:58.778192    6080 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
	W0921 21:36:58.980023    6080 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
	I0921 21:36:58.980076    6080 kic.go:356] could not find the container functional-20220921213353-5916 to remove it. will try anyways
	I0921 21:36:58.988937    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:36:59.182351    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:36:59.182351    6080 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:36:59.189987    6080 cli_runner.go:164] Run: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0"
	W0921 21:36:59.383089    6080 cli_runner.go:211] docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 21:36:59.383089    6080 oci.go:646] error shutdown functional-20220921213353-5916: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:00.397008    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:37:00.576285    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:37:00.576285    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:00.576285    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:37:00.576285    6080 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:00.990531    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:37:01.184713    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:37:01.184789    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:01.184789    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:37:01.184856    6080 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:01.804954    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:37:01.985305    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:37:01.985572    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:01.985572    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:37:01.985572    6080 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:03.408075    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:37:03.630230    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:37:03.630431    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:03.630431    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:37:03.630476    6080 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:04.841021    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:37:05.020192    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:37:05.020192    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:05.020192    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:37:05.020192    6080 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:08.488542    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:37:08.682338    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:37:08.682776    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:08.682776    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:37:08.682776    6080 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:13.247349    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:37:13.441381    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:37:13.441645    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:13.441645    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:37:13.441717    6080 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:19.296323    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:37:19.475077    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:37:19.475077    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:19.475077    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
	I0921 21:37:19.475077    6080 oci.go:88] couldn't shut down functional-20220921213353-5916 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	 
	I0921 21:37:19.482525    6080 cli_runner.go:164] Run: docker rm -f -v functional-20220921213353-5916
	I0921 21:37:19.706192    6080 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
	W0921 21:37:19.884491    6080 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
	I0921 21:37:19.891491    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:37:20.086189    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:37:20.093695    6080 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
	I0921 21:37:20.093695    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
	W0921 21:37:20.274277    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
	I0921 21:37:20.274310    6080 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220921213353-5916
	I0921 21:37:20.274310    6080 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220921213353-5916
	
	** /stderr **
	W0921 21:37:20.275203    6080 delete.go:139] delete failed (probably ok) <nil>
	I0921 21:37:20.275203    6080 fix.go:115] Sleeping 1 second for extra luck!
	I0921 21:37:21.288307    6080 start.go:125] createHost starting for "" (driver="docker")
	I0921 21:37:21.293990    6080 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0921 21:37:21.293990    6080 start.go:159] libmachine.API.Create for "functional-20220921213353-5916" (driver="docker")
	I0921 21:37:21.293990    6080 client.go:168] LocalClient.Create starting
	I0921 21:37:21.294780    6080 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 21:37:21.294780    6080 main.go:134] libmachine: Decoding PEM data...
	I0921 21:37:21.294780    6080 main.go:134] libmachine: Parsing certificate...
	I0921 21:37:21.295355    6080 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 21:37:21.295523    6080 main.go:134] libmachine: Decoding PEM data...
	I0921 21:37:21.295523    6080 main.go:134] libmachine: Parsing certificate...
	I0921 21:37:21.303438    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:37:21.491193    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:37:21.498809    6080 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
	I0921 21:37:21.498809    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
	W0921 21:37:21.683137    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
	I0921 21:37:21.683287    6080 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20220921213353-5916
	I0921 21:37:21.683355    6080 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20220921213353-5916
	
	** /stderr **
	I0921 21:37:21.690993    6080 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 21:37:21.899648    6080 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000406830] amended:false}} dirty:map[] misses:0}
	I0921 21:37:21.899648    6080 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:37:21.913643    6080 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000406830] amended:true}} dirty:map[192.168.49.0:0xc000406830 192.168.58.0:0xc00048a5b0] misses:0}
	I0921 21:37:21.913643    6080 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:37:21.913643    6080 network_create.go:115] attempt to create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 21:37:21.921446    6080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916
	W0921 21:37:22.121367    6080 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916 returned with exit code 1
	E0921 21:37:22.121367    6080 network_create.go:104] error while trying to create docker network functional-20220921213353-5916 192.168.58.0/24: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 086f6ca4b948148b5c753ba2bc2bae51b8309d08fd412973a0dd7526f4b38637 (br-086f6ca4b948): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 21:37:22.121367    6080 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 086f6ca4b948148b5c753ba2bc2bae51b8309d08fd412973a0dd7526f4b38637 (br-086f6ca4b948): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 21:37:22.135883    6080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 21:37:22.330181    6080 cli_runner.go:164] Run: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 21:37:22.510140    6080 cli_runner.go:211] docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 21:37:22.510140    6080 client.go:171] LocalClient.Create took 1.2161437s
	I0921 21:37:24.530517    6080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:37:24.537521    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:24.722531    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:37:24.722717    6080 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:24.897729    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:25.107219    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:37:25.107219    6080 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:25.536832    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:25.751197    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:37:25.752587    6080 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:37:25.752659    6080 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:25.762393    6080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:37:25.768401    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:25.969029    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:37:25.969029    6080 retry.go:31] will retry after 144.863405ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:26.129479    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:26.308456    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:37:26.313644    6080 retry.go:31] will retry after 410.553224ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:26.748660    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:26.955542    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:37:26.955692    6080 retry.go:31] will retry after 314.505366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:27.293970    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:27.491659    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:37:27.491659    6080 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:37:27.491659    6080 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:27.491659    6080 start.go:128] duration metric: createHost completed in 6.2033185s
	I0921 21:37:27.501637    6080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:37:27.509476    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:27.693576    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:37:27.693648    6080 retry.go:31] will retry after 200.38067ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:27.913406    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:28.145850    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:37:28.145850    6080 retry.go:31] will retry after 252.474839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:28.418010    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:28.594632    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:37:28.595018    6080 retry.go:31] will retry after 585.618668ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:29.196815    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:29.393126    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:37:29.393302    6080 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:37:29.393302    6080 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:29.403887    6080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:37:29.409899    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:29.594849    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:37:29.594849    6080 retry.go:31] will retry after 194.626905ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:29.795743    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:29.983809    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:37:29.983809    6080 retry.go:31] will retry after 346.182076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:30.345856    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:30.539360    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	I0921 21:37:30.539702    6080 retry.go:31] will retry after 579.704465ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:31.130262    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
	W0921 21:37:31.324127    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
	W0921 21:37:31.324265    6080 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:37:31.324362    6080 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	I0921 21:37:31.324362    6080 fix.go:57] fixHost completed within 33.1773922s
	I0921 21:37:31.324362    6080 start.go:83] releasing machines lock for "functional-20220921213353-5916", held for 33.1775695s
	W0921 21:37:31.324617    6080 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220921213353-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	I0921 21:37:31.329139    6080 out.go:177] 
	W0921 21:37:31.331368    6080 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
	
	W0921 21:37:31.331368    6080 out.go:239] * Suggestion: Restart Docker
	W0921 21:37:31.331368    6080 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 21:37:31.334689    6080 out.go:177] 
	
	* 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_1059.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1230: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 logs failed: exit status 80
functional_test.go:1220: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|------------------------------------------------------------------------------------|-------------------------------------|-------------------|---------|---------------------|---------------------|
| Command |                                        Args                                        |               Profile               |       User        | Version |     Start Time      |      End Time       |
|---------|------------------------------------------------------------------------------------|-------------------------------------|-------------------|---------|---------------------|---------------------|
| start   | -o=json --download-only -p                                                         | download-only-20220921212952-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:29 GMT |                     |
|         | download-only-20220921212952-5916                                                  |                                     |                   |         |                     |                     |
|         | --force --alsologtostderr                                                          |                                     |                   |         |                     |                     |
|         | --kubernetes-version=v1.16.0                                                       |                                     |                   |         |                     |                     |
|         | --container-runtime=docker                                                         |                                     |                   |         |                     |                     |
|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
| start   | -o=json --download-only -p                                                         | download-only-20220921212952-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT |                     |
|         | download-only-20220921212952-5916                                                  |                                     |                   |         |                     |                     |
|         | --force --alsologtostderr                                                          |                                     |                   |         |                     |                     |
|         | --kubernetes-version=v1.25.2                                                       |                                     |                   |         |                     |                     |
|         | --container-runtime=docker                                                         |                                     |                   |         |                     |                     |
|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
| delete  | --all                                                                              | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
| delete  | -p                                                                                 | download-only-20220921212952-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
|         | download-only-20220921212952-5916                                                  |                                     |                   |         |                     |                     |
| delete  | -p                                                                                 | download-only-20220921212952-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
|         | download-only-20220921212952-5916                                                  |                                     |                   |         |                     |                     |
| start   | --download-only -p                                                                 | download-docker-20220921213020-5916 | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT |                     |
|         | download-docker-20220921213020-5916                                                |                                     |                   |         |                     |                     |
|         | --force --alsologtostderr                                                          |                                     |                   |         |                     |                     |
|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
| delete  | -p                                                                                 | download-docker-20220921213020-5916 | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
|         | download-docker-20220921213020-5916                                                |                                     |                   |         |                     |                     |
| start   | --download-only -p                                                                 | binary-mirror-20220921213055-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT |                     |
|         | binary-mirror-20220921213055-5916                                                  |                                     |                   |         |                     |                     |
|         | --alsologtostderr --binary-mirror                                                  |                                     |                   |         |                     |                     |
|         | http://127.0.0.1:57904                                                             |                                     |                   |         |                     |                     |
|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
| delete  | -p                                                                                 | binary-mirror-20220921213055-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
|         | binary-mirror-20220921213055-5916                                                  |                                     |                   |         |                     |                     |
| start   | -p addons-20220921213059-5916                                                      | addons-20220921213059-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:31 GMT |                     |
|         | --wait=true --memory=4000                                                          |                                     |                   |         |                     |                     |
|         | --alsologtostderr                                                                  |                                     |                   |         |                     |                     |
|         | --addons=registry                                                                  |                                     |                   |         |                     |                     |
|         | --addons=metrics-server                                                            |                                     |                   |         |                     |                     |
|         | --addons=volumesnapshots                                                           |                                     |                   |         |                     |                     |
|         | --addons=csi-hostpath-driver                                                       |                                     |                   |         |                     |                     |
|         | --addons=gcp-auth                                                                  |                                     |                   |         |                     |                     |
|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
|         | --addons=ingress                                                                   |                                     |                   |         |                     |                     |
|         | --addons=ingress-dns                                                               |                                     |                   |         |                     |                     |
|         | --addons=helm-tiller                                                               |                                     |                   |         |                     |                     |
| delete  | -p addons-20220921213059-5916                                                      | addons-20220921213059-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:31 GMT | 21 Sep 22 21:31 GMT |
| start   | -p nospam-20220921213151-5916 -n=1 --memory=2250 --wait=false                      | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:31 GMT |                     |
|         | --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 |                                     |                   |         |                     |                     |
|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
| start   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | start --dry-run                                                                    |                                     |                   |         |                     |                     |
| start   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | start --dry-run                                                                    |                                     |                   |         |                     |                     |
| start   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | start --dry-run                                                                    |                                     |                   |         |                     |                     |
| pause   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | pause                                                                              |                                     |                   |         |                     |                     |
| pause   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | pause                                                                              |                                     |                   |         |                     |                     |
| pause   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | pause                                                                              |                                     |                   |         |                     |                     |
| unpause | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | unpause                                                                            |                                     |                   |         |                     |                     |
| unpause | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | unpause                                                                            |                                     |                   |         |                     |                     |
| unpause | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | unpause                                                                            |                                     |                   |         |                     |                     |
| stop    | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | stop                                                                               |                                     |                   |         |                     |                     |
| stop    | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:33 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | stop                                                                               |                                     |                   |         |                     |                     |
| stop    | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:33 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | stop                                                                               |                                     |                   |         |                     |                     |
| delete  | -p nospam-20220921213151-5916                                                      | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:33 GMT | 21 Sep 22 21:33 GMT |
| start   | -p                                                                                 | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:33 GMT |                     |
|         | functional-20220921213353-5916                                                     |                                     |                   |         |                     |                     |
|         | --memory=4000                                                                      |                                     |                   |         |                     |                     |
|         | --apiserver-port=8441                                                              |                                     |                   |         |                     |                     |
|         | --wait=all --driver=docker                                                         |                                     |                   |         |                     |                     |
| start   | -p                                                                                 | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:34 GMT |                     |
|         | functional-20220921213353-5916                                                     |                                     |                   |         |                     |                     |
|         | --alsologtostderr -v=8                                                             |                                     |                   |         |                     |                     |
| cache   | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
|         | cache add k8s.gcr.io/pause:3.1                                                     |                                     |                   |         |                     |                     |
| cache   | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
|         | cache add k8s.gcr.io/pause:3.3                                                     |                                     |                   |         |                     |                     |
| cache   | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
|         | cache add                                                                          |                                     |                   |         |                     |                     |
|         | k8s.gcr.io/pause:latest                                                            |                                     |                   |         |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.3                                                        | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
| cache   | list                                                                               | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
| ssh     | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
|         | ssh sudo crictl images                                                             |                                     |                   |         |                     |                     |
| ssh     | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
|         | ssh sudo docker rmi                                                                |                                     |                   |         |                     |                     |
|         | k8s.gcr.io/pause:latest                                                            |                                     |                   |         |                     |                     |
| ssh     | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
|         | ssh sudo crictl inspecti                                                           |                                     |                   |         |                     |                     |
|         | k8s.gcr.io/pause:latest                                                            |                                     |                   |         |                     |                     |
| cache   | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
|         | cache reload                                                                       |                                     |                   |         |                     |                     |
| ssh     | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
|         | ssh sudo crictl inspecti                                                           |                                     |                   |         |                     |                     |
|         | k8s.gcr.io/pause:latest                                                            |                                     |                   |         |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.1                                                        | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
| cache   | delete k8s.gcr.io/pause:latest                                                     | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
| kubectl | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
|         | kubectl -- --context                                                               |                                     |                   |         |                     |                     |
|         | functional-20220921213353-5916                                                     |                                     |                   |         |                     |                     |
|         | get pods                                                                           |                                     |                   |         |                     |                     |
| start   | -p functional-20220921213353-5916                                                  | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision           |                                     |                   |         |                     |                     |
|         | --wait=all                                                                         |                                     |                   |         |                     |                     |
|---------|------------------------------------------------------------------------------------|-------------------------------------|-------------------|---------|---------------------|---------------------|

* 
* ==> Last Start <==
* Log file created at: 2022/09/21 21:36:16
Running on machine: minikube2
Binary: Built with gc go1.19.1 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0921 21:36:16.253670    6080 out.go:296] Setting OutFile to fd 992 ...
I0921 21:36:16.306661    6080 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0921 21:36:16.306661    6080 out.go:309] Setting ErrFile to fd 668...
I0921 21:36:16.306661    6080 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0921 21:36:16.326042    6080 out.go:303] Setting JSON to false
I0921 21:36:16.328732    6080 start.go:115] hostinfo: {"hostname":"minikube2","uptime":2244,"bootTime":1663793932,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
W0921 21:36:16.329367    6080 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0921 21:36:16.333798    6080 out.go:177] * [functional-20220921213353-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
I0921 21:36:16.336178    6080 notify.go:214] Checking for updates...
I0921 21:36:16.338864    6080 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
I0921 21:36:16.341289    6080 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
I0921 21:36:16.343686    6080 out.go:177]   - MINIKUBE_LOCATION=14995
I0921 21:36:16.346193    6080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0921 21:36:16.351139    6080 config.go:180] Loaded profile config "functional-20220921213353-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 21:36:16.351139    6080 driver.go:365] Setting default libvirt URI to qemu:///system
I0921 21:36:16.637054    6080 docker.go:137] docker version: linux-20.10.17
I0921 21:36:16.644807    6080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0921 21:36:17.168944    6080 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 21:36:16.7990226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
I0921 21:36:17.173954    6080 out.go:177] * Using the docker driver based on existing profile
I0921 21:36:17.176425    6080 start.go:284] selected driver: docker
I0921 21:36:17.176425    6080 start.go:808] validating driver "docker" against &{Name:functional-20220921213353-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:functional-20220921213353-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 21:36:17.176425    6080 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0921 21:36:17.189017    6080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0921 21:36:17.716472    6080 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 21:36:17.3434501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-p
lugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
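The two docker info blobs above are the raw output of the docker system info --format "{{json .}}" call shown in the log. As a hedged aside (not part of the test run), the same daemon state can be spot-checked field by field with the docker CLI's Go templates, e.g.:

    docker system info --format "{{.ServerVersion}} {{.OperatingSystem}} NCPU={{.NCPU}}"

Exact output depends on the local Docker Desktop install; the field names are those of the docker info structure used above.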
I0921 21:36:17.774790    6080 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0921 21:36:17.774790    6080 cni.go:95] Creating CNI manager for ""
I0921 21:36:17.774860    6080 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0921 21:36:17.774860    6080 start_flags.go:316] config:
{Name:functional-20220921213353-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:functional-20220921213353-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 21:36:17.779515    6080 out.go:177] * Starting control plane node functional-20220921213353-5916 in cluster functional-20220921213353-5916
I0921 21:36:17.781339    6080 cache.go:120] Beginning downloading kic base image for docker with docker
I0921 21:36:17.783952    6080 out.go:177] * Pulling base image ...
I0921 21:36:17.786719    6080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 21:36:17.786719    6080 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
I0921 21:36:17.786719    6080 cache.go:57] Caching tarball of preloaded images
I0921 21:36:17.786719    6080 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
I0921 21:36:17.786719    6080 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0921 21:36:17.786719    6080 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
I0921 21:36:17.787674    6080 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220921213353-5916\config.json ...
I0921 21:36:17.997663    6080 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
I0921 21:36:17.997782    6080 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
I0921 21:36:17.998107    6080 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
I0921 21:36:17.998186    6080 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
I0921 21:36:17.998298    6080 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
I0921 21:36:17.998298    6080 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
I0921 21:36:17.998502    6080 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
I0921 21:36:17.998502    6080 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
I0921 21:36:17.998502    6080 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
I0921 21:36:20.222960    6080 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
I0921 21:36:20.223038    6080 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
I0921 21:36:20.223101    6080 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
I0921 21:36:20.223101    6080 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
I0921 21:36:20.407337    6080 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
I0921 21:36:21.937626    6080 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
W0921 21:36:21.937626    6080 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
I0921 21:36:21.937626    6080 cache.go:208] Successfully downloaded all kic artifacts
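As a hedged manual cross-check (not something the test performs), one could verify that the kic base image restored from the cached tarball actually reached the local daemon; the image reference is taken from the log lines above:

    docker images gcr.io/k8s-minikube/kicbase
    docker image inspect --format "{{.Id}}" gcr.io/k8s-minikube/kicbase:v0.0.34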
I0921 21:36:21.937626    6080 start.go:364] acquiring machines lock for functional-20220921213353-5916: {Name:mk3f5ae8740d25300eb345feb1053ed449398cb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0921 21:36:21.938173    6080 start.go:368] acquired machines lock for "functional-20220921213353-5916" in 546.6µs
I0921 21:36:21.938354    6080 start.go:96] Skipping create...Using existing machine configuration
I0921 21:36:21.938434    6080 fix.go:55] fixHost starting: 
I0921 21:36:21.952190    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:22.139760    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:22.139760    6080 fix.go:103] recreateIfNeeded on functional-20220921213353-5916: state= err=unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:22.139760    6080 fix.go:108] machineExists: false. err=machine does not exist
I0921 21:36:22.143965    6080 out.go:177] * docker "functional-20220921213353-5916" container is missing, will recreate.
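To confirm by hand that the container is truly absent rather than present in an unexpected state, the inspect check above can be widened to all containers; this is an editorial suggestion, with the filter value simply being the profile name from this run:

    docker ps -a --filter name=functional-20220921213353-5916 --format "{{.Names}} {{.Status}}"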
I0921 21:36:22.146227    6080 delete.go:124] DEMOLISHING functional-20220921213353-5916 ...
I0921 21:36:22.160033    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:22.355164    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
W0921 21:36:22.355291    6080 stop.go:75] unable to get state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:22.355334    6080 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:22.368925    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:22.566986    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:22.567187    6080 delete.go:82] Unable to get host status for functional-20220921213353-5916, assuming it has already been deleted: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:22.575667    6080 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
W0921 21:36:22.754941    6080 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
I0921 21:36:22.754992    6080 kic.go:356] could not find the container functional-20220921213353-5916 to remove it. will try anyways
I0921 21:36:22.766306    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:22.956751    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
W0921 21:36:22.956751    6080 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:22.967169    6080 cli_runner.go:164] Run: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0"
W0921 21:36:23.207048    6080 cli_runner.go:211] docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0" returned with exit code 1
I0921 21:36:23.207048    6080 oci.go:646] error shutdown functional-20220921213353-5916: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:24.226922    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:24.405201    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:24.405449    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:24.405449    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:24.405524    6080 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:24.968714    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:25.206628    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:25.206698    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:25.206698    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:25.206698    6080 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:26.308678    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:26.501826    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:26.501826    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:26.501826    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:26.501826    6080 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:27.820650    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:28.014954    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:28.015083    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:28.015083    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:28.015083    6080 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:29.609977    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:29.803271    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:29.803271    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:29.803271    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:29.803271    6080 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:32.151484    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:32.330326    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:32.330326    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:32.330326    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:32.330326    6080 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:36.847004    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:37.044165    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:37.044165    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:37.044165    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:37.044165    6080 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:40.277013    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:40.455025    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:40.455025    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:40.455025    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:40.455025    6080 oci.go:88] couldn't shut down functional-20220921213353-5916 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916

I0921 21:36:40.462755    6080 cli_runner.go:164] Run: docker rm -f -v functional-20220921213353-5916
I0921 21:36:40.679812    6080 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
W0921 21:36:40.858916    6080 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
I0921 21:36:40.867031    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0921 21:36:41.061541    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0921 21:36:41.069860    6080 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
I0921 21:36:41.069860    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
W0921 21:36:41.250007    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
I0921 21:36:41.250007    6080 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220921213353-5916
I0921 21:36:41.250007    6080 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: functional-20220921213353-5916

** /stderr **
W0921 21:36:41.250911    6080 delete.go:139] delete failed (probably ok) <nil>
I0921 21:36:41.250911    6080 fix.go:115] Sleeping 1 second for extra luck!
I0921 21:36:42.257634    6080 start.go:125] createHost starting for "" (driver="docker")
I0921 21:36:42.261811    6080 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0921 21:36:42.262397    6080 start.go:159] libmachine.API.Create for "functional-20220921213353-5916" (driver="docker")
I0921 21:36:42.262460    6080 client.go:168] LocalClient.Create starting
I0921 21:36:42.263164    6080 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0921 21:36:42.263330    6080 main.go:134] libmachine: Decoding PEM data...
I0921 21:36:42.263330    6080 main.go:134] libmachine: Parsing certificate...
I0921 21:36:42.263575    6080 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0921 21:36:42.263854    6080 main.go:134] libmachine: Decoding PEM data...
I0921 21:36:42.263854    6080 main.go:134] libmachine: Parsing certificate...
I0921 21:36:42.272090    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0921 21:36:42.459821    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0921 21:36:42.465249    6080 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
I0921 21:36:42.466249    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
W0921 21:36:42.661406    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
I0921 21:36:42.661406    6080 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220921213353-5916
I0921 21:36:42.661406    6080 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: functional-20220921213353-5916

** /stderr **
I0921 21:36:42.668467    6080 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0921 21:36:42.883642    6080 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000406830] misses:0}
I0921 21:36:42.883642    6080 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0921 21:36:42.884322    6080 network_create.go:115] attempt to create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0921 21:36:42.891741    6080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916
W0921 21:36:43.107737    6080 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916 returned with exit code 1
E0921 21:36:43.107952    6080 network_create.go:104] error while trying to create docker network functional-20220921213353-5916 192.168.49.0/24: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network f714dcf8a6e38001b3c606bcfef85df748989420f42256b76baa4bc8f6fcda81 (br-f714dcf8a6e3): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
W0921 21:36:43.107952    6080 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network f714dcf8a6e38001b3c606bcfef85df748989420f42256b76baa4bc8f6fcda81 (br-f714dcf8a6e3): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4

I0921 21:36:43.124052    6080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
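The "networks have overlapping IPv4" failure above means another bridge network already owns 192.168.49.0/24 (the daemon identifies it as br-a04d36bfb3cf). A hedged way to locate it on this host, and remove it only if nothing is attached, would be:

    docker network ls --filter driver=bridge --format "{{.ID}} {{.Name}}"
    docker network inspect --format "{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}" a04d36bfb3cf
    docker network rm a04d36bfb3cf

The ID prefix comes from the error message above; removal is only safe when the network has no attached containers.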
I0921 21:36:43.349497    6080 cli_runner.go:164] Run: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true
W0921 21:36:43.528251    6080 cli_runner.go:211] docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0921 21:36:43.528251    6080 client.go:171] LocalClient.Create took 1.2657847s
I0921 21:36:45.547272    6080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0921 21:36:45.554271    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:45.756359    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:45.756684    6080 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:45.922450    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:46.130089    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:46.130621    6080 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:46.443946    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:46.650140    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:46.650140    6080 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:47.236794    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:47.416282    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:36:47.416449    6080 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916

W0921 21:36:47.416543    6080 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:47.426387    6080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0921 21:36:47.431388    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:47.625368    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:47.625368    6080 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:47.825346    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:48.016391    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:48.016535    6080 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:48.364223    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:48.547800    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:48.547800    6080 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:49.022989    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:49.209142    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:36:49.209142    6080 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916

W0921 21:36:49.209142    6080 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:49.209142    6080 start.go:128] duration metric: createHost completed in 6.9514718s
I0921 21:36:49.221157    6080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0921 21:36:49.226882    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:49.411553    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:49.411553    6080 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:49.618547    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:49.808476    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:49.808476    6080 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:50.118067    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:50.296678    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:50.297113    6080 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:50.980153    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:51.186661    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:36:51.186882    6080 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916

W0921 21:36:51.186882    6080 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:51.197365    6080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0921 21:36:51.203311    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:51.387794    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:51.387794    6080 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:51.584761    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:51.777183    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:51.777695    6080 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:52.122330    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:52.302625    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:52.302625    6080 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:52.924004    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:53.134372    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:36:53.134372    6080 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
W0921 21:36:53.134372    6080 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:53.134372    6080 fix.go:57] fixHost completed within 31.1958556s
I0921 21:36:53.134372    6080 start.go:83] releasing machines lock for "functional-20220921213353-5916", held for 31.196037s
W0921 21:36:53.134372    6080 start.go:602] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
W0921 21:36:53.134372    6080 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
I0921 21:36:53.134372    6080 start.go:617] Will try again in 5 seconds ...
I0921 21:36:58.145977    6080 start.go:364] acquiring machines lock for functional-20220921213353-5916: {Name:mk3f5ae8740d25300eb345feb1053ed449398cb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0921 21:36:58.146618    6080 start.go:368] acquired machines lock for "functional-20220921213353-5916" in 475.4µs
I0921 21:36:58.146795    6080 start.go:96] Skipping create...Using existing machine configuration
I0921 21:36:58.146795    6080 fix.go:55] fixHost starting: 
I0921 21:36:58.161804    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:58.363602    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:58.363628    6080 fix.go:103] recreateIfNeeded on functional-20220921213353-5916: state= err=unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:58.363628    6080 fix.go:108] machineExists: false. err=machine does not exist
I0921 21:36:58.368237    6080 out.go:177] * docker "functional-20220921213353-5916" container is missing, will recreate.
I0921 21:36:58.370531    6080 delete.go:124] DEMOLISHING functional-20220921213353-5916 ...
I0921 21:36:58.383951    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:58.567537    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
W0921 21:36:58.567537    6080 stop.go:75] unable to get state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:58.567537    6080 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:58.584383    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:58.770316    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:58.770580    6080 delete.go:82] Unable to get host status for functional-20220921213353-5916, assuming it has already been deleted: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:58.778192    6080 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
W0921 21:36:58.980023    6080 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
I0921 21:36:58.980076    6080 kic.go:356] could not find the container functional-20220921213353-5916 to remove it. will try anyways
I0921 21:36:58.988937    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:59.182351    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
W0921 21:36:59.182351    6080 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:59.189987    6080 cli_runner.go:164] Run: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0"
W0921 21:36:59.383089    6080 cli_runner.go:211] docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0" returned with exit code 1
I0921 21:36:59.383089    6080 oci.go:646] error shutdown functional-20220921213353-5916: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:00.397008    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:00.576285    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:00.576285    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:00.576285    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:00.576285    6080 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:00.990531    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:01.184713    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:01.184789    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:01.184789    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:01.184856    6080 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:01.804954    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:01.985305    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:01.985572    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:01.985572    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:01.985572    6080 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:03.408075    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:03.630230    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:03.630431    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:03.630431    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:03.630476    6080 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:04.841021    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:05.020192    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:05.020192    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:05.020192    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:05.020192    6080 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:08.488542    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:08.682338    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:08.682776    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:08.682776    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:08.682776    6080 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:13.247349    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:13.441381    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:13.441645    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:13.441645    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:13.441717    6080 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:19.296323    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:19.475077    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:19.475077    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:19.475077    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:19.475077    6080 oci.go:88] couldn't shut down functional-20220921213353-5916 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:19.482525    6080 cli_runner.go:164] Run: docker rm -f -v functional-20220921213353-5916
I0921 21:37:19.706192    6080 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
W0921 21:37:19.884491    6080 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
I0921 21:37:19.891491    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0921 21:37:20.086189    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0921 21:37:20.093695    6080 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
I0921 21:37:20.093695    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
W0921 21:37:20.274277    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
I0921 21:37:20.274310    6080 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220921213353-5916
I0921 21:37:20.274310    6080 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220921213353-5916
** /stderr **
W0921 21:37:20.275203    6080 delete.go:139] delete failed (probably ok) <nil>
I0921 21:37:20.275203    6080 fix.go:115] Sleeping 1 second for extra luck!
I0921 21:37:21.288307    6080 start.go:125] createHost starting for "" (driver="docker")
I0921 21:37:21.293990    6080 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0921 21:37:21.293990    6080 start.go:159] libmachine.API.Create for "functional-20220921213353-5916" (driver="docker")
I0921 21:37:21.293990    6080 client.go:168] LocalClient.Create starting
I0921 21:37:21.294780    6080 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0921 21:37:21.294780    6080 main.go:134] libmachine: Decoding PEM data...
I0921 21:37:21.294780    6080 main.go:134] libmachine: Parsing certificate...
I0921 21:37:21.295355    6080 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0921 21:37:21.295523    6080 main.go:134] libmachine: Decoding PEM data...
I0921 21:37:21.295523    6080 main.go:134] libmachine: Parsing certificate...
I0921 21:37:21.303438    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0921 21:37:21.491193    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0921 21:37:21.498809    6080 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
I0921 21:37:21.498809    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
W0921 21:37:21.683137    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
I0921 21:37:21.683287    6080 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220921213353-5916
I0921 21:37:21.683355    6080 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220921213353-5916
** /stderr **
I0921 21:37:21.690993    6080 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0921 21:37:21.899648    6080 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000406830] amended:false}} dirty:map[] misses:0}
I0921 21:37:21.899648    6080 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0921 21:37:21.913643    6080 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000406830] amended:true}} dirty:map[192.168.49.0:0xc000406830 192.168.58.0:0xc00048a5b0] misses:0}
I0921 21:37:21.913643    6080 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0921 21:37:21.913643    6080 network_create.go:115] attempt to create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0921 21:37:21.921446    6080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916
W0921 21:37:22.121367    6080 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916 returned with exit code 1
E0921 21:37:22.121367    6080 network_create.go:104] error while trying to create docker network functional-20220921213353-5916 192.168.58.0/24: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 086f6ca4b948148b5c753ba2bc2bae51b8309d08fd412973a0dd7526f4b38637 (br-086f6ca4b948): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
W0921 21:37:22.121367    6080 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network 086f6ca4b948148b5c753ba2bc2bae51b8309d08fd412973a0dd7526f4b38637 (br-086f6ca4b948): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
I0921 21:37:22.135883    6080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0921 21:37:22.330181    6080 cli_runner.go:164] Run: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true
W0921 21:37:22.510140    6080 cli_runner.go:211] docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0921 21:37:22.510140    6080 client.go:171] LocalClient.Create took 1.2161437s
I0921 21:37:24.530517    6080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0921 21:37:24.537521    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:24.722531    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:24.722717    6080 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:24.897729    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:25.107219    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:25.107219    6080 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:25.536832    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:25.751197    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:37:25.752587    6080 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
W0921 21:37:25.752659    6080 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:25.762393    6080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0921 21:37:25.768401    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:25.969029    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:25.969029    6080 retry.go:31] will retry after 144.863405ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:26.129479    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:26.308456    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:26.313644    6080 retry.go:31] will retry after 410.553224ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:26.748660    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:26.955542    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:26.955692    6080 retry.go:31] will retry after 314.505366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:27.293970    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:27.491659    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:37:27.491659    6080 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
W0921 21:37:27.491659    6080 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:27.491659    6080 start.go:128] duration metric: createHost completed in 6.2033185s
I0921 21:37:27.501637    6080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0921 21:37:27.509476    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:27.693576    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:27.693648    6080 retry.go:31] will retry after 200.38067ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:27.913406    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:28.145850    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:28.145850    6080 retry.go:31] will retry after 252.474839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:28.418010    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:28.594632    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:28.595018    6080 retry.go:31] will retry after 585.618668ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:29.196815    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:29.393126    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:37:29.393302    6080 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
W0921 21:37:29.393302    6080 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:29.403887    6080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0921 21:37:29.409899    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:29.594849    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:29.594849    6080 retry.go:31] will retry after 194.626905ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:29.795743    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:29.983809    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:29.983809    6080 retry.go:31] will retry after 346.182076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:30.345856    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:30.539360    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:30.539702    6080 retry.go:31] will retry after 579.704465ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:31.130262    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:31.324127    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:37:31.324265    6080 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
W0921 21:37:31.324362    6080 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:31.324362    6080 fix.go:57] fixHost completed within 33.1773922s
I0921 21:37:31.324362    6080 start.go:83] releasing machines lock for "functional-20220921213353-5916", held for 33.1775695s
W0921 21:37:31.324617    6080 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220921213353-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
I0921 21:37:31.329139    6080 out.go:177] 
W0921 21:37:31.331368    6080 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
W0921 21:37:31.331368    6080 out.go:239] * Suggestion: Restart Docker
W0921 21:37:31.331368    6080 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
I0921 21:37:31.334689    6080 out.go:177] 

* 

***
--- FAIL: TestFunctional/serial/LogsCmd (1.65s)

TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1985313701\001\logs.txt
functional_test.go:1242: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1985313701\001\logs.txt: exit status 80 (1.3336199s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_1059.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1244: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1985313701\001\logs.txt failed: exit status 80
functional_test.go:1247: expected empty minikube logs output, but got: 
***
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_1059.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr *****
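minikube logs never gets far enough to emit the expected content here: it exits with GUEST_STATUS because the cluster container does not exist, so the "Linux" marker that the assertion below looks for is missing. Outside the harness the same check can be approximated with PowerShell (a sketch; logs.txt stands in for the temp path the test uses):

    out/minikube-windows-amd64.exe -p functional-20220921213353-5916 logs --file logs.txt
    Select-String -Path logs.txt -Pattern "Linux"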
functional_test.go:1220: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|------------------------------------------------------------------------------------|-------------------------------------|-------------------|---------|---------------------|---------------------|
| Command |                                        Args                                        |               Profile               |       User        | Version |     Start Time      |      End Time       |
|---------|------------------------------------------------------------------------------------|-------------------------------------|-------------------|---------|---------------------|---------------------|
| start   | -o=json --download-only -p                                                         | download-only-20220921212952-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:29 GMT |                     |
|         | download-only-20220921212952-5916                                                  |                                     |                   |         |                     |                     |
|         | --force --alsologtostderr                                                          |                                     |                   |         |                     |                     |
|         | --kubernetes-version=v1.16.0                                                       |                                     |                   |         |                     |                     |
|         | --container-runtime=docker                                                         |                                     |                   |         |                     |                     |
|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
| start   | -o=json --download-only -p                                                         | download-only-20220921212952-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT |                     |
|         | download-only-20220921212952-5916                                                  |                                     |                   |         |                     |                     |
|         | --force --alsologtostderr                                                          |                                     |                   |         |                     |                     |
|         | --kubernetes-version=v1.25.2                                                       |                                     |                   |         |                     |                     |
|         | --container-runtime=docker                                                         |                                     |                   |         |                     |                     |
|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
| delete  | --all                                                                              | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
| delete  | -p                                                                                 | download-only-20220921212952-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
|         | download-only-20220921212952-5916                                                  |                                     |                   |         |                     |                     |
| delete  | -p                                                                                 | download-only-20220921212952-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
|         | download-only-20220921212952-5916                                                  |                                     |                   |         |                     |                     |
| start   | --download-only -p                                                                 | download-docker-20220921213020-5916 | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT |                     |
|         | download-docker-20220921213020-5916                                                |                                     |                   |         |                     |                     |
|         | --force --alsologtostderr                                                          |                                     |                   |         |                     |                     |
|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
| delete  | -p                                                                                 | download-docker-20220921213020-5916 | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
|         | download-docker-20220921213020-5916                                                |                                     |                   |         |                     |                     |
| start   | --download-only -p                                                                 | binary-mirror-20220921213055-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT |                     |
|         | binary-mirror-20220921213055-5916                                                  |                                     |                   |         |                     |                     |
|         | --alsologtostderr --binary-mirror                                                  |                                     |                   |         |                     |                     |
|         | http://127.0.0.1:57904                                                             |                                     |                   |         |                     |                     |
|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
| delete  | -p                                                                                 | binary-mirror-20220921213055-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT | 21 Sep 22 21:30 GMT |
|         | binary-mirror-20220921213055-5916                                                  |                                     |                   |         |                     |                     |
| start   | -p addons-20220921213059-5916                                                      | addons-20220921213059-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:31 GMT |                     |
|         | --wait=true --memory=4000                                                          |                                     |                   |         |                     |                     |
|         | --alsologtostderr                                                                  |                                     |                   |         |                     |                     |
|         | --addons=registry                                                                  |                                     |                   |         |                     |                     |
|         | --addons=metrics-server                                                            |                                     |                   |         |                     |                     |
|         | --addons=volumesnapshots                                                           |                                     |                   |         |                     |                     |
|         | --addons=csi-hostpath-driver                                                       |                                     |                   |         |                     |                     |
|         | --addons=gcp-auth                                                                  |                                     |                   |         |                     |                     |
|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
|         | --addons=ingress                                                                   |                                     |                   |         |                     |                     |
|         | --addons=ingress-dns                                                               |                                     |                   |         |                     |                     |
|         | --addons=helm-tiller                                                               |                                     |                   |         |                     |                     |
| delete  | -p addons-20220921213059-5916                                                      | addons-20220921213059-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:31 GMT | 21 Sep 22 21:31 GMT |
| start   | -p nospam-20220921213151-5916 -n=1 --memory=2250 --wait=false                      | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:31 GMT |                     |
|         | --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 |                                     |                   |         |                     |                     |
|         | --driver=docker                                                                    |                                     |                   |         |                     |                     |
| start   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | start --dry-run                                                                    |                                     |                   |         |                     |                     |
| start   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | start --dry-run                                                                    |                                     |                   |         |                     |                     |
| start   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | start --dry-run                                                                    |                                     |                   |         |                     |                     |
| pause   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | pause                                                                              |                                     |                   |         |                     |                     |
| pause   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | pause                                                                              |                                     |                   |         |                     |                     |
| pause   | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | pause                                                                              |                                     |                   |         |                     |                     |
| unpause | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | unpause                                                                            |                                     |                   |         |                     |                     |
| unpause | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | unpause                                                                            |                                     |                   |         |                     |                     |
| unpause | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | unpause                                                                            |                                     |                   |         |                     |                     |
| stop    | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:32 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | stop                                                                               |                                     |                   |         |                     |                     |
| stop    | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:33 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | stop                                                                               |                                     |                   |         |                     |                     |
| stop    | nospam-20220921213151-5916 --log_dir                                               | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:33 GMT |                     |
|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916           |                                     |                   |         |                     |                     |
|         | stop                                                                               |                                     |                   |         |                     |                     |
| delete  | -p nospam-20220921213151-5916                                                      | nospam-20220921213151-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:33 GMT | 21 Sep 22 21:33 GMT |
| start   | -p                                                                                 | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:33 GMT |                     |
|         | functional-20220921213353-5916                                                     |                                     |                   |         |                     |                     |
|         | --memory=4000                                                                      |                                     |                   |         |                     |                     |
|         | --apiserver-port=8441                                                              |                                     |                   |         |                     |                     |
|         | --wait=all --driver=docker                                                         |                                     |                   |         |                     |                     |
| start   | -p                                                                                 | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:34 GMT |                     |
|         | functional-20220921213353-5916                                                     |                                     |                   |         |                     |                     |
|         | --alsologtostderr -v=8                                                             |                                     |                   |         |                     |                     |
| cache   | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
|         | cache add k8s.gcr.io/pause:3.1                                                     |                                     |                   |         |                     |                     |
| cache   | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
|         | cache add k8s.gcr.io/pause:3.3                                                     |                                     |                   |         |                     |                     |
| cache   | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
|         | cache add                                                                          |                                     |                   |         |                     |                     |
|         | k8s.gcr.io/pause:latest                                                            |                                     |                   |         |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.3                                                        | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
| cache   | list                                                                               | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
| ssh     | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
|         | ssh sudo crictl images                                                             |                                     |                   |         |                     |                     |
| ssh     | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
|         | ssh sudo docker rmi                                                                |                                     |                   |         |                     |                     |
|         | k8s.gcr.io/pause:latest                                                            |                                     |                   |         |                     |                     |
| ssh     | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
|         | ssh sudo crictl inspecti                                                           |                                     |                   |         |                     |                     |
|         | k8s.gcr.io/pause:latest                                                            |                                     |                   |         |                     |                     |
| cache   | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
|         | cache reload                                                                       |                                     |                   |         |                     |                     |
| ssh     | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
|         | ssh sudo crictl inspecti                                                           |                                     |                   |         |                     |                     |
|         | k8s.gcr.io/pause:latest                                                            |                                     |                   |         |                     |                     |
| cache   | delete k8s.gcr.io/pause:3.1                                                        | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
| cache   | delete k8s.gcr.io/pause:latest                                                     | minikube                            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT | 21 Sep 22 21:36 GMT |
| kubectl | functional-20220921213353-5916                                                     | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
|         | kubectl -- --context                                                               |                                     |                   |         |                     |                     |
|         | functional-20220921213353-5916                                                     |                                     |                   |         |                     |                     |
|         | get pods                                                                           |                                     |                   |         |                     |                     |
| start   | -p functional-20220921213353-5916                                                  | functional-20220921213353-5916      | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:36 GMT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision           |                                     |                   |         |                     |                     |
|         | --wait=all                                                                         |                                     |                   |         |                     |                     |
|---------|------------------------------------------------------------------------------------|-------------------------------------|-------------------|---------|---------------------|---------------------|
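The Audit table above is part of the captured minikube logs output and records every minikube invocation against this installation. Assuming the default layout under MINIKUBE_HOME (an assumption; this run points MINIKUBE_HOME at the minikube-integration\.minikube directory), the same history can be read directly from the audit log:

    # Last few audit entries for this installation (audit.json location is an assumption).
    Get-Content "$env:MINIKUBE_HOME\logs\audit.json" -Tail 10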

* 
* ==> Last Start <==
* Log file created at: 2022/09/21 21:36:16
Running on machine: minikube2
Binary: Built with gc go1.19.1 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
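In this format the leading letter is the severity (I=info, W=warning, E=error, F=fatal), followed by the month and day, the wall-clock time, the thread id, and the emitting source file and line. For example, the first entry below, I0921 21:36:16.253670    6080 out.go:296], is an info-level message logged on 21 Sep at 21:36:16.253670 by thread 6080 from out.go line 296.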
I0921 21:36:16.253670    6080 out.go:296] Setting OutFile to fd 992 ...
I0921 21:36:16.306661    6080 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0921 21:36:16.306661    6080 out.go:309] Setting ErrFile to fd 668...
I0921 21:36:16.306661    6080 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0921 21:36:16.326042    6080 out.go:303] Setting JSON to false
I0921 21:36:16.328732    6080 start.go:115] hostinfo: {"hostname":"minikube2","uptime":2244,"bootTime":1663793932,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
W0921 21:36:16.329367    6080 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0921 21:36:16.333798    6080 out.go:177] * [functional-20220921213353-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
I0921 21:36:16.336178    6080 notify.go:214] Checking for updates...
I0921 21:36:16.338864    6080 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
I0921 21:36:16.341289    6080 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
I0921 21:36:16.343686    6080 out.go:177]   - MINIKUBE_LOCATION=14995
I0921 21:36:16.346193    6080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0921 21:36:16.351139    6080 config.go:180] Loaded profile config "functional-20220921213353-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 21:36:16.351139    6080 driver.go:365] Setting default libvirt URI to qemu:///system
I0921 21:36:16.637054    6080 docker.go:137] docker version: linux-20.10.17
I0921 21:36:16.644807    6080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0921 21:36:17.168944    6080 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 21:36:16.7990226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-p
lugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
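The block above is minikube's capture of docker system info --format "{{json .}}" against the Docker Desktop backend. The same fields can be pulled by hand to sanity-check the daemon (a sketch using only JSON keys visible in the capture above):

    docker system info --format "{{json .}}" | ConvertFrom-Json |
        Select-Object OperatingSystem, ServerVersion, KernelVersion, DockerRootDir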
I0921 21:36:17.173954    6080 out.go:177] * Using the docker driver based on existing profile
I0921 21:36:17.176425    6080 start.go:284] selected driver: docker
I0921 21:36:17.176425    6080 start.go:808] validating driver "docker" against &{Name:functional-20220921213353-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:functional-20220921213353-5916 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 21:36:17.176425    6080 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0921 21:36:17.189017    6080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0921 21:36:17.716472    6080 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 21:36:17.3434501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-p
lugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
I0921 21:36:17.774790    6080 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0921 21:36:17.774790    6080 cni.go:95] Creating CNI manager for ""
I0921 21:36:17.774860    6080 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0921 21:36:17.774860    6080 start_flags.go:316] config:
{Name:functional-20220921213353-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:functional-20220921213353-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 21:36:17.779515    6080 out.go:177] * Starting control plane node functional-20220921213353-5916 in cluster functional-20220921213353-5916
I0921 21:36:17.781339    6080 cache.go:120] Beginning downloading kic base image for docker with docker
I0921 21:36:17.783952    6080 out.go:177] * Pulling base image ...
I0921 21:36:17.786719    6080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 21:36:17.786719    6080 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
I0921 21:36:17.786719    6080 cache.go:57] Caching tarball of preloaded images
I0921 21:36:17.786719    6080 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
I0921 21:36:17.786719    6080 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0921 21:36:17.786719    6080 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
I0921 21:36:17.787674    6080 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20220921213353-5916\config.json ...
I0921 21:36:17.997663    6080 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
I0921 21:36:17.997782    6080 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
I0921 21:36:17.998107    6080 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
I0921 21:36:17.998186    6080 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
I0921 21:36:17.998298    6080 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
I0921 21:36:17.998298    6080 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
I0921 21:36:17.998502    6080 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
I0921 21:36:17.998502    6080 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
I0921 21:36:17.998502    6080 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
I0921 21:36:20.222960    6080 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
I0921 21:36:20.223038    6080 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
I0921 21:36:20.223101    6080 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
I0921 21:36:20.223101    6080 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
I0921 21:36:20.407337    6080 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
I0921 21:36:21.937626    6080 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
W0921 21:36:21.937626    6080 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
I0921 21:36:21.937626    6080 cache.go:208] Successfully downloaded all kic artifacts
I0921 21:36:21.937626    6080 start.go:364] acquiring machines lock for functional-20220921213353-5916: {Name:mk3f5ae8740d25300eb345feb1053ed449398cb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0921 21:36:21.938173    6080 start.go:368] acquired machines lock for "functional-20220921213353-5916" in 546.6µs
I0921 21:36:21.938354    6080 start.go:96] Skipping create...Using existing machine configuration
I0921 21:36:21.938434    6080 fix.go:55] fixHost starting: 
I0921 21:36:21.952190    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:22.139760    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:22.139760    6080 fix.go:103] recreateIfNeeded on functional-20220921213353-5916: state= err=unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:22.139760    6080 fix.go:108] machineExists: false. err=machine does not exist
I0921 21:36:22.143965    6080 out.go:177] * docker "functional-20220921213353-5916" container is missing, will recreate.
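From here on, the start path repeatedly probes the node container's state with docker container inspect --format={{.State.Status}}; on a healthy host that prints a state such as running or exited, whereas in this run every probe fails with "No such container", which is why minikube falls back to demolishing and recreating the machine. The probe can be repeated by hand (command taken from the log; quoting the template is only for PowerShell-friendliness):

    docker container inspect functional-20220921213353-5916 --format "{{.State.Status}}"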
I0921 21:36:22.146227    6080 delete.go:124] DEMOLISHING functional-20220921213353-5916 ...
I0921 21:36:22.160033    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:22.355164    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
W0921 21:36:22.355291    6080 stop.go:75] unable to get state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:22.355334    6080 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:22.368925    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:22.566986    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:22.567187    6080 delete.go:82] Unable to get host status for functional-20220921213353-5916, assuming it has already been deleted: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:22.575667    6080 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
W0921 21:36:22.754941    6080 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
I0921 21:36:22.754992    6080 kic.go:356] could not find the container functional-20220921213353-5916 to remove it. will try anyways
I0921 21:36:22.766306    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:22.956751    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
W0921 21:36:22.956751    6080 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:22.967169    6080 cli_runner.go:164] Run: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0"
W0921 21:36:23.207048    6080 cli_runner.go:211] docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0" returned with exit code 1
I0921 21:36:23.207048    6080 oci.go:646] error shutdown functional-20220921213353-5916: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:24.226922    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:24.405201    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:24.405449    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:24.405449    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:24.405524    6080 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:24.968714    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:25.206628    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:25.206698    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:25.206698    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:25.206698    6080 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:26.308678    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:26.501826    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:26.501826    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:26.501826    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:26.501826    6080 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:27.820650    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:28.014954    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:28.015083    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:28.015083    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:28.015083    6080 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:29.609977    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:29.803271    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:29.803271    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:29.803271    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:29.803271    6080 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:32.151484    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:32.330326    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:32.330326    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:32.330326    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:32.330326    6080 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:36.847004    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:37.044165    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:37.044165    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:37.044165    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:37.044165    6080 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:40.277013    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:40.455025    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:40.455025    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:40.455025    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:36:40.455025    6080 oci.go:88] couldn't shut down functional-20220921213353-5916 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916

I0921 21:36:40.462755    6080 cli_runner.go:164] Run: docker rm -f -v functional-20220921213353-5916
I0921 21:36:40.679812    6080 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
W0921 21:36:40.858916    6080 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
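
Once shutdown verification is abandoned ("might be okay"), the log falls back to "docker rm -f -v" and then re-inspects to confirm the container is really gone. A small sketch of that force-remove-then-verify step; treating a failing inspect as "already deleted" mirrors what the log does, but the helper names are mine:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // forceRemove kills and deletes the container; -v also drops its
    // anonymous volumes, matching the "docker rm -f -v" in the log.
    func forceRemove(container string) error {
        out, err := exec.Command("docker", "rm", "-f", "-v", container).CombinedOutput()
        if err != nil {
            return fmt.Errorf("docker rm -f -v %s: %v\n%s", container, err, out)
        }
        return nil
    }

    // stillExists re-inspects the container; an inspect error is taken to
    // mean the removal already succeeded.
    func stillExists(container string) bool {
        out, err := exec.Command("docker", "container", "inspect",
            "-f", "{{.Id}}", container).CombinedOutput()
        return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
        name := "functional-20220921213353-5916"
        if err := forceRemove(name); err != nil {
            fmt.Println(err)
        }
        fmt.Println("container still present:", stillExists(name))
    }
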
I0921 21:36:40.867031    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0921 21:36:41.061541    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0921 21:36:41.069860    6080 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
I0921 21:36:41.069860    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
W0921 21:36:41.250007    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
I0921 21:36:41.250007    6080 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220921213353-5916
I0921 21:36:41.250007    6080 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220921213353-5916

** /stderr **
W0921 21:36:41.250911    6080 delete.go:139] delete failed (probably ok) <nil>
I0921 21:36:41.250911    6080 fix.go:115] Sleeping 1 second for extra luck!
I0921 21:36:42.257634    6080 start.go:125] createHost starting for "" (driver="docker")
I0921 21:36:42.261811    6080 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0921 21:36:42.262397    6080 start.go:159] libmachine.API.Create for "functional-20220921213353-5916" (driver="docker")
I0921 21:36:42.262460    6080 client.go:168] LocalClient.Create starting
I0921 21:36:42.263164    6080 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0921 21:36:42.263330    6080 main.go:134] libmachine: Decoding PEM data...
I0921 21:36:42.263330    6080 main.go:134] libmachine: Parsing certificate...
I0921 21:36:42.263575    6080 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0921 21:36:42.263854    6080 main.go:134] libmachine: Decoding PEM data...
I0921 21:36:42.263854    6080 main.go:134] libmachine: Parsing certificate...
I0921 21:36:42.272090    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0921 21:36:42.459821    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0921 21:36:42.465249    6080 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
I0921 21:36:42.466249    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
W0921 21:36:42.661406    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
I0921 21:36:42.661406    6080 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20220921213353-5916
I0921 21:36:42.661406    6080 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20220921213353-5916

** /stderr **
I0921 21:36:42.668467    6080 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0921 21:36:42.883642    6080 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000406830] misses:0}
I0921 21:36:42.883642    6080 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0921 21:36:42.884322    6080 network_create.go:115] attempt to create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0921 21:36:42.891741    6080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916
W0921 21:36:43.107737    6080 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916 returned with exit code 1
E0921 21:36:43.107952    6080 network_create.go:104] error while trying to create docker network functional-20220921213353-5916 192.168.49.0/24: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network f714dcf8a6e38001b3c606bcfef85df748989420f42256b76baa4bc8f6fcda81 (br-f714dcf8a6e3): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
W0921 21:36:43.107952    6080 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error response from daemon: cannot create network f714dcf8a6e38001b3c606bcfef85df748989420f42256b76baa4bc8f6fcda81 (br-f714dcf8a6e3): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
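
This is the first real failure of the recreate attempt: the dedicated cluster network cannot be created because another bridge network (br-a04d36bfb3cf) already claims an overlapping IPv4 range, so 192.168.49.0/24 is rejected. One way to see which subnets are already taken before picking one is to walk the existing networks with standard docker inspect templating; this is only an illustration, not minikube's network_create.go logic:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // usedSubnets collects the IPv4 subnets already reserved by existing
    // Docker networks, so a new cluster network can avoid them.
    func usedSubnets() ([]string, error) {
        names, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
        if err != nil {
            return nil, err
        }
        var subnets []string
        for _, n := range strings.Fields(string(names)) {
            out, err := exec.Command("docker", "network", "inspect", n,
                "--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
            if err != nil {
                continue // the network may have vanished between ls and inspect
            }
            subnets = append(subnets, strings.Fields(string(out))...)
        }
        return subnets, nil
    }

    func main() {
        s, err := usedSubnets()
        if err != nil {
            fmt.Println("listing networks failed:", err)
            return
        }
        fmt.Println("subnets already in use:", s)
    }
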

I0921 21:36:43.124052    6080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0921 21:36:43.349497    6080 cli_runner.go:164] Run: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true
W0921 21:36:43.528251    6080 cli_runner.go:211] docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0921 21:36:43.528251    6080 client.go:171] LocalClient.Create took 1.2657847s
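
The labeled volume that backs the node's data also fails to create here; the daemon's reason surfaces later in the StartHost error as a read-only /var/lib/docker/volumes. A sketch of the same labeled "docker volume create", keeping the daemon's message visible so the root cause is not lost; the label keys are taken from the log, the helper name is not:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createNodeVolume creates the labeled volume the node container needs.
    // The daemon's message (here "read-only file system") is the useful
    // part, so it is returned verbatim.
    func createNodeVolume(name string) error {
        cmd := exec.Command("docker", "volume", "create", name,
            "--label", "name.minikube.sigs.k8s.io="+name,
            "--label", "created_by.minikube.sigs.k8s.io=true")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("volume create %s: %v\n%s", name, err, out)
        }
        return nil
    }

    func main() {
        if err := createNodeVolume("functional-20220921213353-5916"); err != nil {
            fmt.Println(err)
        }
    }
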
I0921 21:36:45.547272    6080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0921 21:36:45.554271    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:45.756359    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:45.756684    6080 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
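
Every one of these ssh retries reduces to one lookup: which host port is published for the container's 22/tcp. The log uses the inspect template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}; "docker port <name> 22/tcp" asks the same question. A sketch of the lookup with that template, assuming the container exists; when it does not, the error matches the "No such container" lines here:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort looks up the host port that Docker published for the
    // container's 22/tcp, using the same template as the log.
    func sshHostPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect",
            "-f", tmpl, container).CombinedOutput()
        if err != nil {
            return "", fmt.Errorf("get port 22 for %q: %v\n%s", container, err, out)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("functional-20220921213353-5916")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("ssh reachable on 127.0.0.1:" + port)
    }
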
I0921 21:36:45.922450    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:46.130089    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:46.130621    6080 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:46.443946    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:46.650140    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:46.650140    6080 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:47.236794    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:47.416282    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:36:47.416449    6080 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916

W0921 21:36:47.416543    6080 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:47.426387    6080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0921 21:36:47.431388    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:47.625368    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:47.625368    6080 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:47.825346    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:48.016391    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:48.016535    6080 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:48.364223    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:48.547800    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:48.547800    6080 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:49.022989    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:49.209142    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:36:49.209142    6080 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916

W0921 21:36:49.209142    6080 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:49.209142    6080 start.go:128] duration metric: createHost completed in 6.9514718s
I0921 21:36:49.221157    6080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0921 21:36:49.226882    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:49.411553    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:49.411553    6080 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:49.618547    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:49.808476    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:49.808476    6080 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:50.118067    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:50.296678    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:50.297113    6080 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:50.980153    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:51.186661    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:36:51.186882    6080 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916

W0921 21:36:51.186882    6080 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:51.197365    6080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0921 21:36:51.203311    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:51.387794    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:51.387794    6080 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:51.584761    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:51.777183    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:51.777695    6080 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:52.122330    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:52.302625    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:36:52.302625    6080 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:52.924004    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:36:53.134372    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:36:53.134372    6080 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916

W0921 21:36:53.134372    6080 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:53.134372    6080 fix.go:57] fixHost completed within 31.1958556s
I0921 21:36:53.134372    6080 start.go:83] releasing machines lock for "functional-20220921213353-5916", held for 31.196037s
W0921 21:36:53.134372    6080 start.go:602] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system
W0921 21:36:53.134372    6080 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

stderr:
Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system

I0921 21:36:53.134372    6080 start.go:617] Will try again in 5 seconds ...
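
The StartHost error above is the underlying problem for this whole loop: the Docker daemon cannot mkdir under /var/lib/docker/volumes because that filesystem has gone read-only, which points at an unhealthy Docker Desktop VM rather than at minikube itself. One hedged way to confirm that from the host is to bind-mount the volume root into a throwaway container and attempt a write; this is a diagnostic idea, not something the test harness runs:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // volumeRootWritable bind-mounts the daemon's volume root into a
    // throwaway container and tries to create and delete a file there.
    func volumeRootWritable() error {
        cmd := exec.Command("docker", "run", "--rm",
            "-v", "/var/lib/docker/volumes:/probe",
            "alpine", "sh", "-c", "touch /probe/.rw-probe && rm /probe/.rw-probe")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("volume root not writable: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        if err := volumeRootWritable(); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("/var/lib/docker/volumes is writable")
    }
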
I0921 21:36:58.145977    6080 start.go:364] acquiring machines lock for functional-20220921213353-5916: {Name:mk3f5ae8740d25300eb345feb1053ed449398cb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0921 21:36:58.146618    6080 start.go:368] acquired machines lock for "functional-20220921213353-5916" in 475.4µs
I0921 21:36:58.146795    6080 start.go:96] Skipping create...Using existing machine configuration
I0921 21:36:58.146795    6080 fix.go:55] fixHost starting: 
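
The machines lock acquired above serializes create/fix operations for this profile, polling with a short delay until a timeout (the log shows Delay:500ms Timeout:10m0s). A rough in-process analogue using the standard library's TryLock (Go 1.18+); minikube's real lock is cross-process, so this is only the shape of the idea:

    package main

    import (
        "errors"
        "fmt"
        "sync"
        "time"
    )

    // acquireWithTimeout polls TryLock every delay until timeout elapses.
    func acquireWithTimeout(mu *sync.Mutex, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if mu.TryLock() {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for machines lock")
            }
            time.Sleep(delay)
        }
    }

    func main() {
        var mu sync.Mutex
        start := time.Now()
        if err := acquireWithTimeout(&mu, 500*time.Millisecond, 10*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        defer mu.Unlock()
        fmt.Printf("acquired machines lock in %v\n", time.Since(start))
    }
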
I0921 21:36:58.161804    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:58.363602    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:58.363628    6080 fix.go:103] recreateIfNeeded on functional-20220921213353-5916: state= err=unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:58.363628    6080 fix.go:108] machineExists: false. err=machine does not exist
I0921 21:36:58.368237    6080 out.go:177] * docker "functional-20220921213353-5916" container is missing, will recreate.
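
fix.go's machineExists check above maps the failed inspect to "machine does not exist", which is what routes execution into the recreate path instead of aborting. A sketch of that classification, assuming any "No such container" message means the node can safely be rebuilt and anything else is a genuine error; the helper name is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // machineExists distinguishes "the node container is definitely gone"
    // (safe to recreate) from "its state could not be determined".
    func machineExists(container string) (bool, error) {
        out, err := exec.Command("docker", "container", "inspect",
            "--format", "{{.State.Status}}", container).CombinedOutput()
        if err == nil {
            return true, nil
        }
        if strings.Contains(string(out), "No such container") {
            return false, nil // missing container: recreate it
        }
        return false, fmt.Errorf("unknown state %q: %v", container, err)
    }

    func main() {
        ok, err := machineExists("functional-20220921213353-5916")
        fmt.Println("exists:", ok, "err:", err)
    }
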
I0921 21:36:58.370531    6080 delete.go:124] DEMOLISHING functional-20220921213353-5916 ...
I0921 21:36:58.383951    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:58.567537    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
W0921 21:36:58.567537    6080 stop.go:75] unable to get state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:58.567537    6080 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:58.584383    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:58.770316    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:36:58.770580    6080 delete.go:82] Unable to get host status for functional-20220921213353-5916, assuming it has already been deleted: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:58.778192    6080 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
W0921 21:36:58.980023    6080 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
I0921 21:36:58.980076    6080 kic.go:356] could not find the container functional-20220921213353-5916 to remove it. will try anyways
I0921 21:36:58.988937    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:36:59.182351    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
W0921 21:36:59.182351    6080 oci.go:84] error getting container status, will try to delete anyways: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:36:59.189987    6080 cli_runner.go:164] Run: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0"
W0921 21:36:59.383089    6080 cli_runner.go:211] docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0" returned with exit code 1
I0921 21:36:59.383089    6080 oci.go:646] error shutdown functional-20220921213353-5916: docker exec --privileged -t functional-20220921213353-5916 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:00.397008    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:00.576285    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:00.576285    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:00.576285    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:00.576285    6080 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:00.990531    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:01.184713    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:01.184789    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:01.184789    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:01.184856    6080 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:01.804954    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:01.985305    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:01.985572    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:01.985572    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:01.985572    6080 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:03.408075    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:03.630230    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:03.630431    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:03.630431    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:03.630476    6080 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:04.841021    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:05.020192    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:05.020192    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:05.020192    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:05.020192    6080 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:08.488542    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:08.682338    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:08.682776    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:08.682776    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:08.682776    6080 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:13.247349    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:13.441381    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:13.441645    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:13.441645    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:13.441717    6080 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:19.296323    6080 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
W0921 21:37:19.475077    6080 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
I0921 21:37:19.475077    6080 oci.go:658] temporary error verifying shutdown: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:19.475077    6080 oci.go:660] temporary error: container functional-20220921213353-5916 status is  but expect it to be exited
I0921 21:37:19.475077    6080 oci.go:88] couldn't shut down functional-20220921213353-5916 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
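Note (illustration, not output from this run): the shutdown poll above keeps re-evaluating the inspect command below. For a container that still exists it prints a state such as "running" or "exited"; here the container is already gone, so docker exits with status 1 and the state can never be verified.

    docker container inspect functional-20220921213353-5916 --format "{{.State.Status}}"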

                                                
                                                
I0921 21:37:19.482525    6080 cli_runner.go:164] Run: docker rm -f -v functional-20220921213353-5916
I0921 21:37:19.706192    6080 cli_runner.go:164] Run: docker container inspect -f {{.Id}} functional-20220921213353-5916
W0921 21:37:19.884491    6080 cli_runner.go:211] docker container inspect -f {{.Id}} functional-20220921213353-5916 returned with exit code 1
I0921 21:37:19.891491    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0921 21:37:20.086189    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0921 21:37:20.093695    6080 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
I0921 21:37:20.093695    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
W0921 21:37:20.274277    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
I0921 21:37:20.274310    6080 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error: No such network: functional-20220921213353-5916
I0921 21:37:20.274310    6080 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error: No such network: functional-20220921213353-5916

                                                
                                                
** /stderr **
W0921 21:37:20.275203    6080 delete.go:139] delete failed (probably ok) <nil>
I0921 21:37:20.275203    6080 fix.go:115] Sleeping 1 second for extra luck!
I0921 21:37:21.288307    6080 start.go:125] createHost starting for "" (driver="docker")
I0921 21:37:21.293990    6080 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0921 21:37:21.293990    6080 start.go:159] libmachine.API.Create for "functional-20220921213353-5916" (driver="docker")
I0921 21:37:21.293990    6080 client.go:168] LocalClient.Create starting
I0921 21:37:21.294780    6080 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I0921 21:37:21.294780    6080 main.go:134] libmachine: Decoding PEM data...
I0921 21:37:21.294780    6080 main.go:134] libmachine: Parsing certificate...
I0921 21:37:21.295355    6080 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I0921 21:37:21.295523    6080 main.go:134] libmachine: Decoding PEM data...
I0921 21:37:21.295523    6080 main.go:134] libmachine: Parsing certificate...
I0921 21:37:21.303438    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0921 21:37:21.491193    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0921 21:37:21.498809    6080 network_create.go:272] running [docker network inspect functional-20220921213353-5916] to gather additional debugging logs...
I0921 21:37:21.498809    6080 cli_runner.go:164] Run: docker network inspect functional-20220921213353-5916
W0921 21:37:21.683137    6080 cli_runner.go:211] docker network inspect functional-20220921213353-5916 returned with exit code 1
I0921 21:37:21.683287    6080 network_create.go:275] error running [docker network inspect functional-20220921213353-5916]: docker network inspect functional-20220921213353-5916: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error: No such network: functional-20220921213353-5916
I0921 21:37:21.683355    6080 network_create.go:277] output of [docker network inspect functional-20220921213353-5916]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error: No such network: functional-20220921213353-5916

                                                
                                                
** /stderr **
I0921 21:37:21.690993    6080 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0921 21:37:21.899648    6080 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000406830] amended:false}} dirty:map[] misses:0}
I0921 21:37:21.899648    6080 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0921 21:37:21.913643    6080 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000406830] amended:true}} dirty:map[192.168.49.0:0xc000406830 192.168.58.0:0xc00048a5b0] misses:0}
I0921 21:37:21.913643    6080 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0921 21:37:21.913643    6080 network_create.go:115] attempt to create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0921 21:37:21.921446    6080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916
W0921 21:37:22.121367    6080 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916 returned with exit code 1
E0921 21:37:22.121367    6080 network_create.go:104] error while trying to create docker network functional-20220921213353-5916 192.168.58.0/24: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                
stderr:
Error response from daemon: cannot create network 086f6ca4b948148b5c753ba2bc2bae51b8309d08fd412973a0dd7526f4b38637 (br-086f6ca4b948): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
W0921 21:37:22.121367    6080 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network functional-20220921213353-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-20220921213353-5916 functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                
stderr:
Error response from daemon: cannot create network 086f6ca4b948148b5c753ba2bc2bae51b8309d08fd412973a0dd7526f4b38637 (br-086f6ca4b948): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
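Note (illustration, not output from this run): the "networks have overlapping IPv4" error means another bridge network already owns 192.168.58.0/24; the short ID 8a3cd8d165a4 below is taken from the daemon error above. Its subnet can be confirmed with standard inspect calls such as:

    docker network ls --format "{{.ID}} {{.Name}}"
    docker network inspect 8a3cd8d165a4 --format "{{range .IPAM.Config}}{{.Subnet}}{{end}}"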

                                                
                                                
I0921 21:37:22.135883    6080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0921 21:37:22.330181    6080 cli_runner.go:164] Run: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true
W0921 21:37:22.510140    6080 cli_runner.go:211] docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
I0921 21:37:22.510140    6080 client.go:171] LocalClient.Create took 1.2161437s
I0921 21:37:24.530517    6080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0921 21:37:24.537521    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:24.722531    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:24.722717    6080 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:24.897729    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:25.107219    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:25.107219    6080 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:25.536832    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:25.751197    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:37:25.752587    6080 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916

                                                
                                                
W0921 21:37:25.752659    6080 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
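Note (illustration, not output from this run): each retry above evaluates the Go template below against docker container inspect to recover the host port mapped to the node's SSH port (22/tcp). With no container to inspect there is nothing to index, so every attempt ends in exit status 1.

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-20220921213353-5916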
I0921 21:37:25.762393    6080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0921 21:37:25.768401    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:25.969029    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:25.969029    6080 retry.go:31] will retry after 144.863405ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:26.129479    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:26.308456    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:26.313644    6080 retry.go:31] will retry after 410.553224ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:26.748660    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:26.955542    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:26.955692    6080 retry.go:31] will retry after 314.505366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:27.293970    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:27.491659    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:37:27.491659    6080 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916

                                                
                                                
W0921 21:37:27.491659    6080 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:27.491659    6080 start.go:128] duration metric: createHost completed in 6.2033185s
I0921 21:37:27.501637    6080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0921 21:37:27.509476    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:27.693576    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:27.693648    6080 retry.go:31] will retry after 200.38067ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:27.913406    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:28.145850    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:28.145850    6080 retry.go:31] will retry after 252.474839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:28.418010    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:28.594632    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:28.595018    6080 retry.go:31] will retry after 585.618668ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:29.196815    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:29.393126    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:37:29.393302    6080 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916

                                                
                                                
W0921 21:37:29.393302    6080 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:29.403887    6080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0921 21:37:29.409899    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:29.594849    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:29.594849    6080 retry.go:31] will retry after 194.626905ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:29.795743    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:29.983809    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:29.983809    6080 retry.go:31] will retry after 346.182076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:30.345856    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:30.539360    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
I0921 21:37:30.539702    6080 retry.go:31] will retry after 579.704465ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:31.130262    6080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916
W0921 21:37:31.324127    6080 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916 returned with exit code 1
W0921 21:37:31.324265    6080 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916

                                                
                                                
W0921 21:37:31.324362    6080 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20220921213353-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220921213353-5916: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20220921213353-5916
I0921 21:37:31.324362    6080 fix.go:57] fixHost completed within 33.1773922s
I0921 21:37:31.324362    6080 start.go:83] releasing machines lock for "functional-20220921213353-5916", held for 33.1775695s
W0921 21:37:31.324617    6080 out.go:239] * Failed to start docker container. Running "minikube delete -p functional-20220921213353-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

                                                
                                                
stderr:
Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system

                                                
                                                
I0921 21:37:31.329139    6080 out.go:177] 
W0921 21:37:31.331368    6080 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for functional-20220921213353-5916 container: docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
stdout:

                                                
                                                
stderr:
Error response from daemon: create functional-20220921213353-5916: error while creating volume root path '/var/lib/docker/volumes/functional-20220921213353-5916': mkdir /var/lib/docker/volumes/functional-20220921213353-5916: read-only file system

                                                
                                                
W0921 21:37:31.331368    6080 out.go:239] * Suggestion: Restart Docker
W0921 21:37:31.331368    6080 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
I0921 21:37:31.334689    6080 out.go:177] 

                                                
                                                
--- FAIL: TestFunctional/serial/LogsFileCmd (1.49s)
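Note (illustration, not output from this run): the underlying failure is the daemon's volume root having become read-only. While the daemon is in that state, re-running the volume create from the log reproduces the same "read-only file system" error; restarting Docker Desktop, as suggested above, is the usual recovery step.

    docker volume create functional-20220921213353-5916 --label name.minikube.sigs.k8s.io=functional-20220921213353-5916 --label created_by.minikube.sigs.k8s.io=true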

                                                
                                    
TestFunctional/parallel/StatusCmd (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 status: exit status 7 (632.2359ms)

                                                
                                                
-- stdout --
	functional-20220921213353-5916
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:37:38.153066     904 status.go:258] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	E0921 21:37:38.153066     904 status.go:261] The "functional-20220921213353-5916" host does not exist!

                                                
                                                
** /stderr **
functional_test.go:848: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 status" : exit status 7
functional_test.go:852: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (585.7147ms)

                                                
                                                
-- stdout --
	host:Nonexistent,kublet:Nonexistent,apiserver:Nonexistent,kubeconfig:Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:37:38.723720    7612 status.go:258] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	E0921 21:37:38.723750    7612 status.go:261] The "functional-20220921213353-5916" host does not exist!

                                                
                                                
** /stderr **
functional_test.go:854: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:864: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 status -o json

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 status -o json: exit status 7 (603.045ms)

                                                
                                                
-- stdout --
	{"Name":"functional-20220921213353-5916","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:37:39.341919    8436 status.go:258] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	E0921 21:37:39.341919    8436 status.go:261] The "functional-20220921213353-5916" host does not exist!

                                                
                                                
** /stderr **
functional_test.go:866: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (243.3147ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (591.5664ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:37:40.183722    8752 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/StatusCmd (2.66s)
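Note (illustration, not output from this run): the -f flag takes a Go template over minikube's status struct, so on a healthy cluster the same invocation would report values such as Running and Configured rather than Nonexistent.

    out/minikube-windows-amd64.exe -p functional-20220921213353-5916 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}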

                                                
                                    
TestFunctional/parallel/ServiceCmd (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220921213353-5916 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Non-zero exit: kubectl --context functional-20220921213353-5916 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8: exit status 1 (244.4143ms)

                                                
                                                
** stderr ** 
	W0921 21:37:40.584384    3044 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220921213353-5916" does not exist

                                                
                                                
** /stderr **
functional_test.go:1436: failed to create hello-node deployment with this command "kubectl --context functional-20220921213353-5916 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8": exit status 1.
functional_test.go:1401: service test failed - dumping debug information
functional_test.go:1402: -----------------------service failure post-mortem--------------------------------
functional_test.go:1405: (dbg) Run:  kubectl --context functional-20220921213353-5916 describe po hello-node

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1405: (dbg) Non-zero exit: kubectl --context functional-20220921213353-5916 describe po hello-node: exit status 1 (271.4131ms)

                                                
                                                
** stderr ** 
	W0921 21:37:40.854002    7236 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220921213353-5916
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:1407: "kubectl --context functional-20220921213353-5916 describe po hello-node" failed: exit status 1
functional_test.go:1409: hello-node pod describe:
functional_test.go:1411: (dbg) Run:  kubectl --context functional-20220921213353-5916 logs -l app=hello-node

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1411: (dbg) Non-zero exit: kubectl --context functional-20220921213353-5916 logs -l app=hello-node: exit status 1 (293.271ms)

                                                
                                                
** stderr ** 
	W0921 21:37:41.145956    6212 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220921213353-5916
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:1413: "kubectl --context functional-20220921213353-5916 logs -l app=hello-node" failed: exit status 1
functional_test.go:1415: hello-node logs:
functional_test.go:1417: (dbg) Run:  kubectl --context functional-20220921213353-5916 describe svc hello-node

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1417: (dbg) Non-zero exit: kubectl --context functional-20220921213353-5916 describe svc hello-node: exit status 1 (243.2889ms)

                                                
                                                
** stderr ** 
	W0921 21:37:41.402403    8776 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220921213353-5916
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:1419: "kubectl --context functional-20220921213353-5916 describe svc hello-node" failed: exit status 1
functional_test.go:1421: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (272.7715ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (620.3368ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:37:42.385332     968 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/ServiceCmd (1.98s)
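Note (illustration, not output from this run): every kubectl call in this test fails for the same reason; the kubeconfig at C:\Users\jenkins.minikube2\minikube-integration\kubeconfig was never written, so no context named functional-20220921213353-5916 exists. With that KUBECONFIG in effect, the command below would print an empty context list.

    kubectl config get-contexts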

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220921213353-5916 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20220921213353-5916 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8: exit status 1 (262.4945ms)

                                                
                                                
** stderr ** 
	W0921 21:37:40.519767    1480 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220921213353-5916" does not exist

                                                
                                                
** /stderr **
functional_test.go:1562: failed to create hello-node deployment with this command "kubectl --context functional-20220921213353-5916 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8": exit status 1.
functional_test.go:1527: service test failed - dumping debug information
functional_test.go:1528: -----------------------service failure post-mortem--------------------------------
functional_test.go:1531: (dbg) Run:  kubectl --context functional-20220921213353-5916 describe po hello-node-connect

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1531: (dbg) Non-zero exit: kubectl --context functional-20220921213353-5916 describe po hello-node-connect: exit status 1 (271.8296ms)

                                                
                                                
** stderr ** 
	W0921 21:37:40.806068    3372 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220921213353-5916
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:1533: "kubectl --context functional-20220921213353-5916 describe po hello-node-connect" failed: exit status 1
functional_test.go:1535: hello-node pod describe:
functional_test.go:1537: (dbg) Run:  kubectl --context functional-20220921213353-5916 logs -l app=hello-node-connect

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1537: (dbg) Non-zero exit: kubectl --context functional-20220921213353-5916 logs -l app=hello-node-connect: exit status 1 (292.3018ms)

                                                
                                                
** stderr ** 
	W0921 21:37:41.075259     904 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220921213353-5916
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:1539: "kubectl --context functional-20220921213353-5916 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1541: hello-node logs:
functional_test.go:1543: (dbg) Run:  kubectl --context functional-20220921213353-5916 describe svc hello-node-connect

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1543: (dbg) Non-zero exit: kubectl --context functional-20220921213353-5916 describe svc hello-node-connect: exit status 1 (260.2939ms)

                                                
                                                
** stderr ** 
	W0921 21:37:41.368395    3624 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220921213353-5916
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:1545: "kubectl --context functional-20220921213353-5916 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1547: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (276.5456ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (582.9427ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:37:42.316665    4536 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (1.98s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-20220921213353-5916" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (248.3093ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (630.7944ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:37:45.712386    9184 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.89s)

                                                
                                    
TestFunctional/parallel/SSHCmd (3.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "echo hello": exit status 80 (1.2099835s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_config_5e6c3abd9ac0062476f7bc1a2bb10e26d0fcd439_1.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1659: failed to run an ssh command. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh \"echo hello\"" : exit status 80
functional_test.go:1663: expected minikube ssh command output to be -"hello"- but got *"\n\n"*. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh \"echo hello\""
functional_test.go:1671: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "cat /etc/hostname"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "cat /etc/hostname": exit status 80 (1.213457s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_status_abcabdb3ea89e0e0cb5bb0e0976767ebe71062f4_70.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1677: failed to run an ssh command. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh \"cat /etc/hostname\"" : exit status 80
functional_test.go:1681: expected minikube ssh command output to be -"functional-20220921213353-5916"- but got *"\n\n"*. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/SSHCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (251.1285ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (614.1332ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:37:39.751215    2388 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/SSHCmd (3.30s)
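Exit status 80 with reason GUEST_STATUS in this block means the Docker container backing the profile does not exist, so the ssh command short-circuits before reaching the guest. A quick way to confirm that state, assuming the container/profile name and binary path used throughout this log, would be:

	docker ps -a --filter "name=functional-20220921213353-5916"
	out/minikube-windows-amd64.exe status -p functional-20220921213353-5916

An empty "docker ps" listing plus a "Nonexistent" host status matches the post-mortem output captured above; the CpCmd, FileSync, and CertSync failures that follow share the same root cause.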

                                                
                                    
TestFunctional/parallel/CpCmd (4.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 cp testdata\cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 cp testdata\cp-test.txt /home/docker/cp-test.txt: exit status 80 (1.1409489s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_3fca102df916c8614fa73144aa64855c45eac5a1_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
helpers_test.go:559: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 cp testdata\\cp-test.txt /home/docker/cp-test.txt" : exit status 80
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh -n functional-20220921213353-5916 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh -n functional-20220921213353-5916 "sudo cat /home/docker/cp-test.txt": exit status 80 (1.1626108s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_044a2db5fb68dde6cbeed007e5e5f9ee411e4400_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
helpers_test.go:537: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh -n functional-20220921213353-5916 \"sudo cat /home/docker/cp-test.txt\"" : exit status 80
helpers_test.go:571: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"Test file for checking file cp process",
+ 	"\n\n",
  )
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 cp functional-20220921213353-5916:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd1472142648\001\cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 cp functional-20220921213353-5916:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd1472142648\001\cp-test.txt: exit status 80 (1.1303554s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                 │
	│    * If the above advice does not help, please let us know:                                                                     │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                   │
	│                                                                                                                                 │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                        │
	│    * Please also attach the following file to the GitHub issue:                                                                 │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_11.log    │
	│                                                                                                                                 │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
helpers_test.go:559: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 cp functional-20220921213353-5916:/home/docker/cp-test.txt C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\TestFunctionalparallelCpCmd1472142648\\001\\cp-test.txt" : exit status 80
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh -n functional-20220921213353-5916 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh -n functional-20220921213353-5916 "sudo cat /home/docker/cp-test.txt": exit status 80 (1.1499543s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                 │
	│    * If the above advice does not help, please let us know:                                                                     │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                   │
	│                                                                                                                                 │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                        │
	│    * Please also attach the following file to the GitHub issue:                                                                 │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_11.log    │
	│                                                                                                                                 │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
helpers_test.go:537: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh -n functional-20220921213353-5916 \"sudo cat /home/docker/cp-test.txt\"" : exit status 80
helpers_test.go:526: failed to read test file 'testdata/cp-test.txt' : open C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd1472142648\001\cp-test.txt: The system cannot find the file specified.
helpers_test.go:571: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"\n\n",
+ 	"",
  )
--- FAIL: TestFunctional/parallel/CpCmd (4.59s)

                                                
                                    
TestFunctional/parallel/MySQL (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220921213353-5916 replace --force -f testdata\mysql.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Non-zero exit: kubectl --context functional-20220921213353-5916 replace --force -f testdata\mysql.yaml: exit status 1 (262.7566ms)

                                                
                                                
** stderr ** 
	W0921 21:37:48.016179    4100 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20220921213353-5916" does not exist

                                                
                                                
** /stderr **
functional_test.go:1721: failed to kubectl replace mysql: args "kubectl --context functional-20220921213353-5916 replace --force -f testdata\\mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (269.2627ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (596.4293ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:37:48.946452    8304 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/MySQL (1.14s)

                                                
                                    
TestFunctional/parallel/FileSync (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/5916/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /etc/test/nested/copy/5916/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1857: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /etc/test/nested/copy/5916/hosts": exit status 80 (1.2157097s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_5559a5416b35d3b7c6f5b45c301a37a07abec847_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1859: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /etc/test/nested/copy/5916/hosts" failed: exit status 80
functional_test.go:1862: file sync test content: 

                                                
                                                
functional_test.go:1872: /etc/sync.test content mismatch (-want +got):
  string(
- 	"Test file for checking file sync process",
+ 	"\n\n",
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/FileSync]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (241.5725ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (623.3392ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:37:47.803272    8788 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/FileSync (2.09s)

                                                
                                    
TestFunctional/parallel/CertSync (7.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/5916.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /etc/ssl/certs/5916.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /etc/ssl/certs/5916.pem": exit status 80 (1.1871415s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_327d5e162571bec0fd8a1b14a66b1e7d07f28b91_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1901: failed to check existence of "/etc/ssl/certs/5916.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh \"sudo cat /etc/ssl/certs/5916.pem\"": exit status 80
functional_test.go:1907: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/5916.pem mismatch (-want +got):
  string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
  )
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/5916.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /usr/share/ca-certificates/5916.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /usr/share/ca-certificates/5916.pem": exit status 80 (1.1547949s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_image_a78140c03c3b09812c6a7604319e02506584116c_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1901: failed to check existence of "/usr/share/ca-certificates/5916.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh \"sudo cat /usr/share/ca-certificates/5916.pem\"": exit status 80
functional_test.go:1907: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/5916.pem mismatch (-want +got):
  string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
  )
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /etc/ssl/certs/51391683.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 80 (1.1379639s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_image_4f97aa0f12ba576a16ca2b05292f7afcda49921e_4.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1901: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 80
functional_test.go:1907: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
  )
functional_test.go:1925: Checking for existence of /etc/ssl/certs/59162.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /etc/ssl/certs/59162.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /etc/ssl/certs/59162.pem": exit status 80 (1.1057381s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_version_584df66c7473738ba6bddab8b00bad09d875c20e_5.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1928: failed to check existence of "/etc/ssl/certs/59162.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh \"sudo cat /etc/ssl/certs/59162.pem\"": exit status 80
functional_test.go:1934: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/59162.pem mismatch (-want +got):
  string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
  )
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/59162.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /usr/share/ca-certificates/59162.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /usr/share/ca-certificates/59162.pem": exit status 80 (1.1163439s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_version_584df66c7473738ba6bddab8b00bad09d875c20e_5.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1928: failed to check existence of "/usr/share/ca-certificates/59162.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh \"sudo cat /usr/share/ca-certificates/59162.pem\"": exit status 80
functional_test.go:1934: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/59162.pem mismatch (-want +got):
  string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
  )
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 80 (1.1203232s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                 │
	│    * If the above advice does not help, please let us know:                                                                     │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                   │
	│                                                                                                                                 │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                        │
	│    * Please also attach the following file to the GitHub issue:                                                                 │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_11.log    │
	│                                                                                                                                 │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1928: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 80
functional_test.go:1934: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/CertSync]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (226.0359ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (563.5599ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:37:53.665043    8428 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/CertSync (7.63s)

                                                
                                    
TestFunctional/parallel/NodeLabels (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220921213353-5916 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:214: (dbg) Non-zero exit: kubectl --context functional-20220921213353-5916 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (249.7961ms)

                                                
                                                
** stderr ** 
	W0921 21:37:43.801672    3112 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220921213353-5916
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:216: failed to 'kubectl get nodes' with args "kubectl --context functional-20220921213353-5916 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:222: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	W0921 21:37:43.801672    3112 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220921213353-5916
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:222: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	W0921 21:37:43.801672    3112 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220921213353-5916
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:222: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	W0921 21:37:43.801672    3112 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220921213353-5916
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:222: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	W0921 21:37:43.801672    3112 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20220921213353-5916
	* cluster has no server defined

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220921213353-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect functional-20220921213353-5916: exit status 1 (252.0625ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220921213353-5916 -n functional-20220921213353-5916: exit status 7 (551.714ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:37:44.683826    1640 status.go:247] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20220921213353-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/NodeLabels (1.08s)
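Note: the label assertions above fail because kubectl has no kubeconfig and no context for this profile (the node container was never created), not because the minikube.k8s.io/* labels are missing. A minimal diagnostic sketch in PowerShell on the test host, assuming the same profile name; these commands are illustrative and are not part of the recorded run:

	# Check whether a kubeconfig context exists for the profile at all
	kubectl config get-contexts
	kubectl config current-context
	# Once the context exists, the expected labels (minikube.k8s.io/commit, /version,
	# /updated_at, /name) can be checked directly:
	kubectl --context functional-20220921213353-5916 get nodes --show-labels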

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo systemctl is-active crio": exit status 80 (1.2359841s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_docker-env_547776f721aba6dceba24106cb61c1127a06fa4f_6.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1956: output of 
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_docker-env_547776f721aba6dceba24106cb61c1127a06fa4f_6.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **: exit status 80
functional_test.go:1959: For runtime "docker": expected "crio" to be inactive but got "\n\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (1.24s)
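Note: exit status 80 (GUEST_STATUS) means the node container could not be inspected at all, so the crio check never ran inside the guest. A reproduction sketch against a running node, assuming the same profile; illustrative only:

	# Confirm the node container exists before probing runtimes inside it
	docker container inspect functional-20220921213353-5916 --format "{{.State.Status}}"
	# With docker as the active runtime, crio is expected to report "inactive"
	out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh "sudo systemctl is-active crio"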

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (2.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220921213353-5916"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220921213353-5916": exit status 1 (2.8481576s)

                                                
                                                
-- stdout --
	functional-20220921213353-5916
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_4c116c6236290140afdbb5dcaafee8e0c3250b76_3.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	false : The term 'false' is not recognized as the name of a cmdlet, function, script file, or operable program. Check 
	the spelling of the name, or if a path was included, verify that the path is correct and try again.
	At line:1 char:1
	+ false exit code 80
	+ ~~~~~
	    + CategoryInfo          : ObjectNotFound: (false:String) [], CommandNotFoundException
	    + FullyQualifiedErrorId : CommandNotFoundException
	 
	E0921 21:37:45.144649    2532 status.go:258] status error: host: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	E0921 21:37:45.144649    2532 status.go:261] The "functional-20220921213353-5916" host does not exist!

                                                
                                                
** /stderr **
functional_test.go:497: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/powershell (2.85s)
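Note: the "false : The term 'false' is not recognized" error is PowerShell trying to execute the failure output of docker-env ("false exit code 80") after it was piped into Invoke-Expression; it is a symptom of the missing cluster, not a separate bug. A guarded variant of the same eval pattern, shown only as an illustration (the variable name and the $LASTEXITCODE check are not part of the test):

	# Capture docker-env output and evaluate it only if the command succeeded
	$dockerEnv = out/minikube-windows-amd64.exe -p functional-20220921213353-5916 docker-env
	if ($LASTEXITCODE -eq 0) { $dockerEnv | Invoke-Expression }
	out/minikube-windows-amd64.exe status -p functional-20220921213353-5916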

                                                
                                    
TestFunctional/parallel/Version/components (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 version -o=json --components

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 version -o=json --components: exit status 80 (1.1384237s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_044a2db5fb68dde6cbeed007e5e5f9ee411e4400_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2198: error version: exit status 80
functional_test.go:2203: expected to see "buildctl" in the minikube version --components but got:

                                                
                                                

                                                
                                                
functional_test.go:2203: expected to see "commit" in the minikube version --components but got:

                                                
                                                

                                                
                                                
functional_test.go:2203: expected to see "containerd" in the minikube version --components but got:

                                                
                                                

                                                
                                                
functional_test.go:2203: expected to see "crictl" in the minikube version --components but got:

                                                
                                                

                                                
                                                
functional_test.go:2203: expected to see "crio" in the minikube version --components but got:

                                                
                                                

                                                
                                                
functional_test.go:2203: expected to see "ctr" in the minikube version --components but got:

                                                
                                                

                                                
                                                
functional_test.go:2203: expected to see "docker" in the minikube version --components but got:

                                                
                                                

                                                
                                                
functional_test.go:2203: expected to see "minikubeVersion" in the minikube version --components but got:

                                                
                                                

                                                
                                                
functional_test.go:2203: expected to see "podman" in the minikube version --components but got:

                                                
                                                

                                                
                                                
functional_test.go:2203: expected to see "run" in the minikube version --components but got:

                                                
                                                

                                                
                                                
functional_test.go:2203: expected to see "crun" in the minikube version --components but got:

                                                
                                                

                                                
                                                
--- FAIL: TestFunctional/parallel/Version/components (1.14s)
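Note: every component assertion above fails for the same reason: the version command produced no output because the guest is missing. Against a running node the same invocation is expected to emit JSON containing the component keys listed above; shown only as an illustration:

	# Output should include buildctl, commit, containerd, crictl, crio, ctr, docker,
	# minikubeVersion, podman, run and crun entries when the node is up
	out/minikube-windows-amd64.exe -p functional-20220921213353-5916 version -o=json --components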

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image ls --format short

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image ls --format short:

                                                
                                                
functional_test.go:270: expected k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.59s)
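Note: with no node container, image ls returns an empty list, so the pause image can never be found; the ImageListTable, ImageListJson and ImageListYaml failures below share this root cause. Against a running node the check reduces to the following, shown as an illustration:

	# k8s.gcr.io/pause is expected to appear in every output format
	out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image ls --format short
	out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image ls --format table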

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image ls --format table

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image ls --format table:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:270: expected | k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image ls --format json

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image ls --format json:
[]
functional_test.go:270: expected ["k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image ls --format yaml

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image ls --format yaml:
[]

                                                
                                                
functional_test.go:270: expected - k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh pgrep buildkitd

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 ssh pgrep buildkitd: exit status 80 (1.1631876s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_image_e2aefe9262d799a959d49d679e73a402d931951a_18.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image build -t localhost/my-image:functional-20220921213353-5916 testdata\build
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image ls
functional_test.go:438: expected "localhost/my-image:functional-20220921213353-5916" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (2.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:337: (dbg) Non-zero exit: docker pull gcr.io/google-containers/addon-resizer:1.8.8: exit status 1 (370.5968ms)

                                                
                                                
** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

                                                
                                                
** /stderr **
functional_test.go:339: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (0.38s)
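Note: this failure is on the host side rather than in minikube: Docker Desktop's containerd metadata store is read-only, so any docker pull fails until the daemon is healthy again (the same error reappears in ImageTagAndLoadDaemon below). A quick health check and retry, shown as an illustration:

	# Verify the host daemon is writable/healthy, then retry the pull used by the test setup
	docker info
	docker pull gcr.io/google-containers/addon-resizer:1.8.8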

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:438: expected "gcr.io/google-containers/addon-resizer:functional-20220921213353-5916" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.41s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:143: failed to get Kubernetes client for "functional-20220921213353-5916": client config: context "functional-20220921213353-5916" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:438: expected "gcr.io/google-containers/addon-resizer:functional-20220921213353-5916" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 update-context --alsologtostderr -v=2

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 update-context --alsologtostderr -v=2: exit status 80 (1.1450749s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:37:51.605975    8692 out.go:296] Setting OutFile to fd 672 ...
	I0921 21:37:51.681067    8692 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:37:51.681067    8692 out.go:309] Setting ErrFile to fd 868...
	I0921 21:37:51.681067    8692 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:37:51.711245    8692 mustload.go:65] Loading cluster: functional-20220921213353-5916
	I0921 21:37:51.712543    8692 config.go:180] Loaded profile config "functional-20220921213353-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:37:51.730291    8692 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:37:51.937641    8692 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:37:51.942771    8692 out.go:177] 
	W0921 21:37:51.944780    8692 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:37:51.944780    8692 out.go:239] * 
	* 
	W0921 21:37:52.478757    8692 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_15a8ec4b54c4600ccdf64f589dd9f75cfe823832_3.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_15a8ec4b54c4600ccdf64f589dd9f75cfe823832_3.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 21:37:52.481974    8692 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:2047: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:2052: update-context: got="\n\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (1.15s)
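Note: update-context is expected to refresh the profile's kubeconfig entry and report either "No changes" or "context has been updated"; here it exits with status 80 before reaching that point, and the no_minikube_cluster and no_clusters variants below fail the same way. The intended flow against an existing profile, shown as an illustration:

	# Re-sync the kubeconfig entry for the profile, then confirm the active context
	out/minikube-windows-amd64.exe -p functional-20220921213353-5916 update-context
	kubectl config current-context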

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 update-context --alsologtostderr -v=2

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 update-context --alsologtostderr -v=2: exit status 80 (1.1109299s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:37:53.822053    4848 out.go:296] Setting OutFile to fd 860 ...
	I0921 21:37:53.890866    4848 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:37:53.890866    4848 out.go:309] Setting ErrFile to fd 1016...
	I0921 21:37:53.890866    4848 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:37:53.903578    4848 mustload.go:65] Loading cluster: functional-20220921213353-5916
	I0921 21:37:53.904414    4848 config.go:180] Loaded profile config "functional-20220921213353-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:37:53.922691    4848 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:37:54.122295    4848 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:37:54.126297    4848 out.go:177] 
	W0921 21:37:54.128288    4848 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:37:54.128288    4848 out.go:239] * 
	* 
	W0921 21:37:54.638439    4848 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_image_e2aefe9262d799a959d49d679e73a402d931951a_18.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_image_e2aefe9262d799a959d49d679e73a402d931951a_18.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 21:37:54.641560    4848 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:2047: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:2052: update-context: got="\n\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (1.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 update-context --alsologtostderr -v=2

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 update-context --alsologtostderr -v=2: exit status 80 (1.0958854s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:37:52.763435    3196 out.go:296] Setting OutFile to fd 912 ...
	I0921 21:37:52.820436    3196 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:37:52.820436    3196 out.go:309] Setting ErrFile to fd 752...
	I0921 21:37:52.820436    3196 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:37:52.831437    3196 mustload.go:65] Loading cluster: functional-20220921213353-5916
	I0921 21:37:52.832456    3196 config.go:180] Loaded profile config "functional-20220921213353-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:37:52.846441    3196 cli_runner.go:164] Run: docker container inspect functional-20220921213353-5916 --format={{.State.Status}}
	W0921 21:37:53.037460    3196 cli_runner.go:211] docker container inspect functional-20220921213353-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:37:53.044443    3196 out.go:177] 
	W0921 21:37:53.047451    3196 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20220921213353-5916": docker container inspect functional-20220921213353-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20220921213353-5916
	
	W0921 21:37:53.047451    3196 out.go:239] * 
	* 
	W0921 21:37:53.575040    3196 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                 │
	│    * If the above advice does not help, please let us know:                                                                     │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                   │
	│                                                                                                                                 │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                        │
	│    * Please also attach the following file to the GitHub issue:                                                                 │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_11.log    │
	│                                                                                                                                 │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                 │
	│    * If the above advice does not help, please let us know:                                                                     │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                   │
	│                                                                                                                                 │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                        │
	│    * Please also attach the following file to the GitHub issue:                                                                 │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_11.log    │
	│                                                                                                                                 │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 21:37:53.579042    3196 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:2047: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20220921213353-5916 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:2052: update-context: got="\n\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (1.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:230: (dbg) Non-zero exit: docker pull gcr.io/google-containers/addon-resizer:1.8.9: exit status 1 (409.0097ms)

                                                
                                                
** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

                                                
                                                
** /stderr **
functional_test.go:232: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image save gcr.io/google-containers/addon-resizer:functional-20220921213353-5916 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:381: expected "C:\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: exit status 80 (1.1033382s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_IMAGE_LOAD: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_fea49abfab0323d8512b535581403500420d48f0_3.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:406: loading image into minikube from file: exit status 80

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_IMAGE_LOAD: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_fea49abfab0323d8512b535581403500420d48f0_3.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.10s)
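Note: two problems combine here: the tarball apparently was never written (ImageSaveToFile above found it missing), and the Windows path is then fed to the image-reference parser, which rejects it (the cached path in the error shows the drive letter rewritten to C_). With a running node the save/load pair exercised by these two tests is the following, using the same arguments as logged above; illustrative only:

	# Save the tagged image to a tarball, then load it back into the node
	out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image save gcr.io/google-containers/addon-resizer:functional-20220921213353-5916 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
	out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar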

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220921213353-5916
functional_test.go:414: (dbg) Non-zero exit: docker rmi gcr.io/google-containers/addon-resizer:functional-20220921213353-5916: exit status 1 (181.86ms)

                                                
                                                
** stderr ** 
	Error: No such image: gcr.io/google-containers/addon-resizer:functional-20220921213353-5916

                                                
                                                
** /stderr **
functional_test.go:416: failed to remove image from docker: exit status 1

                                                
                                                
** stderr ** 
	Error: No such image: gcr.io/google-containers/addon-resizer:functional-20220921213353-5916

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)
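
ImageSaveDaemon never reaches the save step: its setup runs docker rmi against a tag that is not present in the host daemon (the earlier image-command tests in this run also failed, so the tag was apparently never created), and the non-zero exit is reported as the failure. As a hedged illustration only, not the test's actual code, a hypothetical Go helper could probe for the image first, since docker image inspect exits non-zero when the image is absent:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // removeIfPresent runs `docker rmi` only when the image exists locally,
    // so a missing image (as in this run) is not treated as an error.
    func removeIfPresent(image string) error {
        if err := exec.Command("docker", "image", "inspect", image).Run(); err != nil {
            fmt.Println("image not present, skipping rmi:", image)
            return nil
        }
        return exec.Command("docker", "rmi", image).Run()
    }

    func main() {
        img := "gcr.io/google-containers/addon-resizer:functional-20220921213353-5916"
        if err := removeIfPresent(img); err != nil {
            fmt.Println("rmi failed:", err)
        }
    }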

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (50.69s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220921214242-5916 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220921214242-5916 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: exit status 60 (50.5579965s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-20220921214242-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-20220921214242-5916 in cluster ingress-addon-legacy-20220921214242-5916
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* docker "ingress-addon-legacy-20220921214242-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:42:42.913932    7208 out.go:296] Setting OutFile to fd 868 ...
	I0921 21:42:42.977367    7208 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:42:42.977367    7208 out.go:309] Setting ErrFile to fd 844...
	I0921 21:42:42.977367    7208 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:42:42.997072    7208 out.go:303] Setting JSON to false
	I0921 21:42:42.999796    7208 start.go:115] hostinfo: {"hostname":"minikube2","uptime":2631,"bootTime":1663793931,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 21:42:42.999937    7208 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 21:42:43.019738    7208 out.go:177] * [ingress-addon-legacy-20220921214242-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 21:42:43.023606    7208 notify.go:214] Checking for updates...
	I0921 21:42:43.030763    7208 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 21:42:43.033550    7208 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 21:42:43.035916    7208 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 21:42:43.038813    7208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:42:43.041358    7208 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:42:43.334344    7208 docker.go:137] docker version: linux-20.10.17
	I0921 21:42:43.342411    7208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:42:43.877740    7208 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:51 SystemTime:2022-09-21 21:42:43.509042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-p
lugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:42:43.882680    7208 out.go:177] * Using the docker driver based on user configuration
	I0921 21:42:43.884801    7208 start.go:284] selected driver: docker
	I0921 21:42:43.884801    7208 start.go:808] validating driver "docker" against <nil>
	I0921 21:42:43.884992    7208 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 21:42:44.001799    7208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:42:44.533179    7208 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:51 SystemTime:2022-09-21 21:42:44.1606757 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:42:44.533179    7208 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 21:42:44.533899    7208 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 21:42:44.539578    7208 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 21:42:44.541901    7208 cni.go:95] Creating CNI manager for ""
	I0921 21:42:44.541901    7208 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 21:42:44.541901    7208 start_flags.go:316] config:
	{Name:ingress-addon-legacy-20220921214242-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220921214242-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmn
et}
	I0921 21:42:44.544060    7208 out.go:177] * Starting control plane node ingress-addon-legacy-20220921214242-5916 in cluster ingress-addon-legacy-20220921214242-5916
	I0921 21:42:44.547638    7208 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 21:42:44.550652    7208 out.go:177] * Pulling base image ...
	I0921 21:42:44.554061    7208 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0921 21:42:44.554061    7208 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:42:44.600716    7208 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0921 21:42:44.600716    7208 cache.go:57] Caching tarball of preloaded images
	I0921 21:42:44.601782    7208 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0921 21:42:44.605986    7208 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0921 21:42:44.608576    7208 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0921 21:42:44.683111    7208 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0921 21:42:44.779989    7208 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 21:42:44.780058    7208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:42:44.780058    7208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:42:44.780058    7208 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 21:42:44.780058    7208 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 21:42:44.780058    7208 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 21:42:44.780587    7208 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 21:42:44.780587    7208 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 21:42:44.780679    7208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:42:47.624421    7208 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-2429713402: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-2429713402: read-only file system"}
	I0921 21:42:47.624421    7208 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 21:42:47.624421    7208 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 21:42:47.624421    7208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 21:42:47.625231    7208 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:42:47.822659    7208 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 21:42:47.822659    7208 image.go:258] Getting image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 21:42:48.198174    7208 image.go:272] Writing image gcr.io/k8s-minikube/kicbase:v0.0.34
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?I0921 21:42:48.511313    7208 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0921 21:42:48.511995    7208 preload.go:256] verifying checksum of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800msI0921 21:42:49.003871    7208 image.go:306] Pulling image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 21:42:49.323942    7208 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 21:42:49.323942    7208 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 21:42:49.715221    7208 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0921 21:42:49.716255    7208 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220921214242-5916\config.json ...
	I0921 21:42:49.716551    7208 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220921214242-5916\config.json: {Name:mkec4431f5f7e9ecd5c39866c9a16d9735959363 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 21:42:49.717268    7208 cache.go:208] Successfully downloaded all kic artifacts
	I0921 21:42:49.717268    7208 start.go:364] acquiring machines lock for ingress-addon-legacy-20220921214242-5916: {Name:mk894d4dd4929c6eaf7f9c746cd8fb7d42c6a0f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:42:49.718295    7208 start.go:368] acquired machines lock for "ingress-addon-legacy-20220921214242-5916" in 194.3µs
	I0921 21:42:49.718467    7208 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220921214242-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220921214242-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomai
n:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 21:42:49.718467    7208 start.go:125] createHost starting for "" (driver="docker")
	I0921 21:42:49.863776    7208 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0921 21:42:49.864095    7208 start.go:159] libmachine.API.Create for "ingress-addon-legacy-20220921214242-5916" (driver="docker")
	I0921 21:42:49.864095    7208 client.go:168] LocalClient.Create starting
	I0921 21:42:49.865355    7208 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 21:42:49.865706    7208 main.go:134] libmachine: Decoding PEM data...
	I0921 21:42:49.865769    7208 main.go:134] libmachine: Parsing certificate...
	I0921 21:42:49.865819    7208 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 21:42:49.865819    7208 main.go:134] libmachine: Decoding PEM data...
	I0921 21:42:49.865819    7208 main.go:134] libmachine: Parsing certificate...
	I0921 21:42:49.875840    7208 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220921214242-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:42:50.051718    7208 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220921214242-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:42:50.060091    7208 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220921214242-5916] to gather additional debugging logs...
	I0921 21:42:50.060091    7208 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220921214242-5916
	W0921 21:42:50.239106    7208 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:42:50.239106    7208 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220921214242-5916]: docker network inspect ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220921214242-5916
	I0921 21:42:50.239106    7208 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220921214242-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220921214242-5916
	
	** /stderr **
	I0921 21:42:50.246115    7208 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 21:42:50.446076    7208 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000402f00] misses:0}
	I0921 21:42:50.446076    7208 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:42:50.446076    7208 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220921214242-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 21:42:50.455267    7208 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 ingress-addon-legacy-20220921214242-5916
	W0921 21:42:50.658554    7208 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	E0921 21:42:50.658643    7208 network_create.go:104] error while trying to create docker network ingress-addon-legacy-20220921214242-5916 192.168.49.0/24: create docker network ingress-addon-legacy-20220921214242-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 596b9ac26c9b48ce4d7fc1a2ac73b4cfc932c26a7c02addc95ac8b94469405c5 (br-596b9ac26c9b): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 21:42:50.658988    7208 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220921214242-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 596b9ac26c9b48ce4d7fc1a2ac73b4cfc932c26a7c02addc95ac8b94469405c5 (br-596b9ac26c9b): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220921214242-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 596b9ac26c9b48ce4d7fc1a2ac73b4cfc932c26a7c02addc95ac8b94469405c5 (br-596b9ac26c9b): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 21:42:50.673213    7208 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 21:42:50.884331    7208 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220921214242-5916 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 21:42:51.109683    7208 cli_runner.go:211] docker volume create ingress-addon-legacy-20220921214242-5916 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 21:42:51.109772    7208 client.go:171] LocalClient.Create took 1.245669s
	I0921 21:42:53.128394    7208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:42:53.363573    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:42:53.548498    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:42:53.548498    7208 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:42:53.842157    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:42:54.021739    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:42:54.021927    7208 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:42:54.579337    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:42:54.760280    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	W0921 21:42:54.760280    7208 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	
	W0921 21:42:54.760280    7208 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:42:54.771007    7208 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:42:54.778347    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:42:54.948163    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:42:54.948163    7208 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:42:55.207615    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:42:55.398523    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:42:55.398523    7208 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:42:55.763609    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:42:55.956689    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:42:55.956996    7208 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:42:56.645300    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:42:56.856056    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	W0921 21:42:56.856315    7208 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	
	W0921 21:42:56.856366    7208 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:42:56.856404    7208 start.go:128] duration metric: createHost completed in 7.1378955s
	I0921 21:42:56.856404    7208 start.go:83] releasing machines lock for "ingress-addon-legacy-20220921214242-5916", held for 7.1380677s
	W0921 21:42:56.856675    7208 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220921214242-5916 container: docker volume create ingress-addon-legacy-20220921214242-5916 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220921214242-5916: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220921214242-5916': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220921214242-5916: read-only file system
	I0921 21:42:56.871484    7208 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}
	W0921 21:42:57.048089    7208 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:42:57.048089    7208 delete.go:82] Unable to get host status for ingress-addon-legacy-20220921214242-5916, assuming it has already been deleted: state: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	W0921 21:42:57.048089    7208 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220921214242-5916 container: docker volume create ingress-addon-legacy-20220921214242-5916 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220921214242-5916: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220921214242-5916': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220921214242-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220921214242-5916 container: docker volume create ingress-addon-legacy-20220921214242-5916 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220921214242-5916: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220921214242-5916': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220921214242-5916: read-only file system
	
	I0921 21:42:57.048089    7208 start.go:617] Will try again in 5 seconds ...
	I0921 21:43:02.054764    7208 start.go:364] acquiring machines lock for ingress-addon-legacy-20220921214242-5916: {Name:mk894d4dd4929c6eaf7f9c746cd8fb7d42c6a0f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:43:02.055193    7208 start.go:368] acquired machines lock for "ingress-addon-legacy-20220921214242-5916" in 244.9µs
	I0921 21:43:02.055389    7208 start.go:96] Skipping create...Using existing machine configuration
	I0921 21:43:02.055389    7208 fix.go:55] fixHost starting: 
	I0921 21:43:02.069231    7208 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}
	W0921 21:43:02.264640    7208 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:43:02.264721    7208 fix.go:103] recreateIfNeeded on ingress-addon-legacy-20220921214242-5916: state= err=unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:02.264805    7208 fix.go:108] machineExists: false. err=machine does not exist
	I0921 21:43:02.329002    7208 out.go:177] * docker "ingress-addon-legacy-20220921214242-5916" container is missing, will recreate.
	I0921 21:43:02.332059    7208 delete.go:124] DEMOLISHING ingress-addon-legacy-20220921214242-5916 ...
	I0921 21:43:02.345907    7208 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}
	W0921 21:43:02.528920    7208 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:43:02.529212    7208 stop.go:75] unable to get state: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:02.529212    7208 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:02.547097    7208 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}
	W0921 21:43:02.750057    7208 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:43:02.750057    7208 delete.go:82] Unable to get host status for ingress-addon-legacy-20220921214242-5916, assuming it has already been deleted: state: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:02.760016    7208 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ingress-addon-legacy-20220921214242-5916
	W0921 21:43:02.953416    7208 cli_runner.go:211] docker container inspect -f {{.Id}} ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:02.953506    7208 kic.go:356] could not find the container ingress-addon-legacy-20220921214242-5916 to remove it. will try anyways
	I0921 21:43:02.961318    7208 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}
	W0921 21:43:03.157391    7208 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:43:03.157391    7208 oci.go:84] error getting container status, will try to delete anyways: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:03.168118    7208 cli_runner.go:164] Run: docker exec --privileged -t ingress-addon-legacy-20220921214242-5916 /bin/bash -c "sudo init 0"
	W0921 21:43:03.344152    7208 cli_runner.go:211] docker exec --privileged -t ingress-addon-legacy-20220921214242-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 21:43:03.344152    7208 oci.go:646] error shutdown ingress-addon-legacy-20220921214242-5916: docker exec --privileged -t ingress-addon-legacy-20220921214242-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:04.366828    7208 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}
	W0921 21:43:04.545364    7208 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:43:04.545505    7208 oci.go:658] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:04.545505    7208 oci.go:660] temporary error: container ingress-addon-legacy-20220921214242-5916 status is  but expect it to be exited
	I0921 21:43:04.545505    7208 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:04.895457    7208 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}
	W0921 21:43:05.075613    7208 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:43:05.075753    7208 oci.go:658] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:05.075753    7208 oci.go:660] temporary error: container ingress-addon-legacy-20220921214242-5916 status is  but expect it to be exited
	I0921 21:43:05.075753    7208 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:05.535175    7208 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}
	W0921 21:43:05.715779    7208 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:43:05.715779    7208 oci.go:658] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:05.715779    7208 oci.go:660] temporary error: container ingress-addon-legacy-20220921214242-5916 status is  but expect it to be exited
	I0921 21:43:05.715779    7208 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:06.631886    7208 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}
	W0921 21:43:06.856684    7208 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:43:06.856684    7208 oci.go:658] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:06.856919    7208 oci.go:660] temporary error: container ingress-addon-legacy-20220921214242-5916 status is  but expect it to be exited
	I0921 21:43:06.856919    7208 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:08.582713    7208 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}
	W0921 21:43:08.776845    7208 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:43:08.776845    7208 oci.go:658] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:08.776845    7208 oci.go:660] temporary error: container ingress-addon-legacy-20220921214242-5916 status is  but expect it to be exited
	I0921 21:43:08.776845    7208 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:12.112284    7208 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}
	W0921 21:43:12.338932    7208 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:43:12.339027    7208 oci.go:658] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:12.339152    7208 oci.go:660] temporary error: container ingress-addon-legacy-20220921214242-5916 status is  but expect it to be exited
	I0921 21:43:12.339152    7208 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:15.071318    7208 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}
	W0921 21:43:15.254612    7208 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:43:15.254701    7208 oci.go:658] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:15.254858    7208 oci.go:660] temporary error: container ingress-addon-legacy-20220921214242-5916 status is  but expect it to be exited
	I0921 21:43:15.254986    7208 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:20.285967    7208 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}
	W0921 21:43:20.464098    7208 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:43:20.464098    7208 oci.go:658] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:20.464098    7208 oci.go:660] temporary error: container ingress-addon-legacy-20220921214242-5916 status is  but expect it to be exited
	I0921 21:43:20.464098    7208 oci.go:88] couldn't shut down ingress-addon-legacy-20220921214242-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	 
	I0921 21:43:20.471997    7208 cli_runner.go:164] Run: docker rm -f -v ingress-addon-legacy-20220921214242-5916
	I0921 21:43:20.689290    7208 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ingress-addon-legacy-20220921214242-5916
	W0921 21:43:20.876504    7208 cli_runner.go:211] docker container inspect -f {{.Id}} ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:20.884586    7208 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220921214242-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:43:21.064103    7208 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220921214242-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:43:21.074584    7208 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220921214242-5916] to gather additional debugging logs...
	I0921 21:43:21.074584    7208 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220921214242-5916
	W0921 21:43:21.250808    7208 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:21.250808    7208 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220921214242-5916]: docker network inspect ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:21.250808    7208 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220921214242-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220921214242-5916
	
	** /stderr **
	W0921 21:43:21.251802    7208 delete.go:139] delete failed (probably ok) <nil>
	I0921 21:43:21.251802    7208 fix.go:115] Sleeping 1 second for extra luck!
	I0921 21:43:22.264296    7208 start.go:125] createHost starting for "" (driver="docker")
	I0921 21:43:22.285074    7208 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0921 21:43:22.285609    7208 start.go:159] libmachine.API.Create for "ingress-addon-legacy-20220921214242-5916" (driver="docker")
	I0921 21:43:22.285609    7208 client.go:168] LocalClient.Create starting
	I0921 21:43:22.286215    7208 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 21:43:22.286215    7208 main.go:134] libmachine: Decoding PEM data...
	I0921 21:43:22.286730    7208 main.go:134] libmachine: Parsing certificate...
	I0921 21:43:22.286789    7208 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 21:43:22.286789    7208 main.go:134] libmachine: Decoding PEM data...
	I0921 21:43:22.286789    7208 main.go:134] libmachine: Parsing certificate...
	I0921 21:43:22.296306    7208 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220921214242-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:43:22.480323    7208 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220921214242-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:43:22.488344    7208 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220921214242-5916] to gather additional debugging logs...
	I0921 21:43:22.488344    7208 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220921214242-5916
	W0921 21:43:22.669215    7208 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:22.669215    7208 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220921214242-5916]: docker network inspect ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:22.669397    7208 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220921214242-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220921214242-5916
	
	** /stderr **
	I0921 21:43:22.675895    7208 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 21:43:22.903309    7208 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000402f00] amended:false}} dirty:map[] misses:0}
	I0921 21:43:22.903878    7208 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:43:22.919312    7208 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000402f00] amended:true}} dirty:map[192.168.49.0:0xc000402f00 192.168.58.0:0xc0004033c0] misses:0}
	I0921 21:43:22.919312    7208 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:43:22.919312    7208 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220921214242-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 21:43:22.926488    7208 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 ingress-addon-legacy-20220921214242-5916
	W0921 21:43:23.116329    7208 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	E0921 21:43:23.116329    7208 network_create.go:104] error while trying to create docker network ingress-addon-legacy-20220921214242-5916 192.168.58.0/24: create docker network ingress-addon-legacy-20220921214242-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 115a7331bba748ceaeaf02c6fd6d2a2327195061cd6589cdc34fd288f73158ea (br-115a7331bba7): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 21:43:23.116329    7208 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220921214242-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 115a7331bba748ceaeaf02c6fd6d2a2327195061cd6589cdc34fd288f73158ea (br-115a7331bba7): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network ingress-addon-legacy-20220921214242-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 115a7331bba748ceaeaf02c6fd6d2a2327195061cd6589cdc34fd288f73158ea (br-115a7331bba7): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 21:43:23.130389    7208 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 21:43:23.327463    7208 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220921214242-5916 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 21:43:23.527576    7208 cli_runner.go:211] docker volume create ingress-addon-legacy-20220921214242-5916 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 21:43:23.527754    7208 client.go:171] LocalClient.Create took 1.2421371s
	I0921 21:43:25.548119    7208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:43:25.554485    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:25.742288    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:25.742288    7208 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:25.997290    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:26.175470    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:26.175861    7208 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:26.480906    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:26.676141    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:26.676435    7208 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:27.136199    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:27.322249    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	W0921 21:43:27.322364    7208 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	
	W0921 21:43:27.322364    7208 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:27.332708    7208 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:43:27.339483    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:27.524885    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:27.524967    7208 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:27.719697    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:27.901178    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:27.901178    7208 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:28.176377    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:28.373190    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:28.373190    7208 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:28.866085    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:29.044785    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	W0921 21:43:29.045151    7208 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	
	W0921 21:43:29.045151    7208 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:29.045151    7208 start.go:128] duration metric: createHost completed in 6.780815s
	I0921 21:43:29.056557    7208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:43:29.064052    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:29.262264    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:29.262727    7208 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:29.613664    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:29.807092    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:29.807092    7208 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:30.125283    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:30.321614    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:30.321614    7208 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:30.780362    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:30.988977    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	W0921 21:43:30.989373    7208 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	
	W0921 21:43:30.989441    7208 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:31.001295    7208 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:43:31.008325    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:31.176664    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:31.176664    7208 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:31.369344    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:31.562773    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:31.563224    7208 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:32.095857    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:32.275976    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	I0921 21:43:32.275976    7208 retry.go:31] will retry after 673.154531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:32.968709    7208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916
	W0921 21:43:33.163510    7208 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916 returned with exit code 1
	W0921 21:43:33.163510    7208 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	
	W0921 21:43:33.163510    7208 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20220921214242-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220921214242-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	I0921 21:43:33.163510    7208 fix.go:57] fixHost completed within 31.1079369s
	I0921 21:43:33.163510    7208 start.go:83] releasing machines lock for "ingress-addon-legacy-20220921214242-5916", held for 31.1080389s
	W0921 21:43:33.163510    7208 out.go:239] * Failed to start docker container. Running "minikube delete -p ingress-addon-legacy-20220921214242-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220921214242-5916 container: docker volume create ingress-addon-legacy-20220921214242-5916 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220921214242-5916: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220921214242-5916': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220921214242-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p ingress-addon-legacy-20220921214242-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220921214242-5916 container: docker volume create ingress-addon-legacy-20220921214242-5916 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220921214242-5916: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220921214242-5916': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220921214242-5916: read-only file system
	
	I0921 21:43:33.175412    7208 out.go:177] 
	W0921 21:43:33.178649    7208 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220921214242-5916 container: docker volume create ingress-addon-legacy-20220921214242-5916 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220921214242-5916: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220921214242-5916': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220921214242-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for ingress-addon-legacy-20220921214242-5916 container: docker volume create ingress-addon-legacy-20220921214242-5916 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220921214242-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create ingress-addon-legacy-20220921214242-5916: error while creating volume root path '/var/lib/docker/volumes/ingress-addon-legacy-20220921214242-5916': mkdir /var/lib/docker/volumes/ingress-addon-legacy-20220921214242-5916: read-only file system
	
	W0921 21:43:33.179015    7208 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 21:43:33.179307    7208 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 21:43:33.183569    7208 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220921214242-5916 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker" : exit status 60
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (50.69s)
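
Both Docker-level failures captured above can be checked directly with the docker CLI: the dedicated-network step failed because 192.168.58.0/24 overlaps an existing bridge network (br-8a3cd8d165a4), and the volume step failed because /var/lib/docker is mounted read-only. A minimal diagnostic sketch, assuming a shell with the docker CLI on the affected host; the network ID is taken from the error text above and the probe volume name is made up:

	# list bridge networks and the subnet each one claims, to spot the overlap
	docker network ls --format "{{.ID}} {{.Name}} {{.Driver}}"
	docker network inspect 8a3cd8d165a4 --format "{{range .IPAM.Config}}{{.Subnet}}{{end}}"
	# re-run the exact call path that failed: volume creation errors while /var/lib/docker is read-only
	docker volume create minikube-probe
	docker volume rm minikube-probe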

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (1.88s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220921214242-5916 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220921214242-5916 addons enable ingress --alsologtostderr -v=5: exit status 10 (1.0873802s)

-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I0921 21:43:33.626485    6248 out.go:296] Setting OutFile to fd 844 ...
	I0921 21:43:33.689394    6248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:43:33.689394    6248 out.go:309] Setting ErrFile to fd 672...
	I0921 21:43:33.689394    6248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:43:33.705744    6248 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0921 21:43:33.708920    6248 config.go:180] Loaded profile config "ingress-addon-legacy-20220921214242-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0921 21:43:33.709029    6248 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220921214242-5916"
	I0921 21:43:33.709029    6248 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220921214242-5916"
	I0921 21:43:33.709725    6248 host.go:66] Checking if "ingress-addon-legacy-20220921214242-5916" exists ...
	I0921 21:43:33.723543    6248 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}
	W0921 21:43:33.920234    6248 cli_runner.go:211] docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:43:33.920486    6248 host.go:54] host status for "ingress-addon-legacy-20220921214242-5916" returned error: state: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916
	W0921 21:43:33.920486    6248 addons.go:199] "ingress-addon-legacy-20220921214242-5916" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0921 21:43:33.920486    6248 addons.go:383] Verifying addon ingress=true in "ingress-addon-legacy-20220921214242-5916"
	I0921 21:43:33.923265    6248 out.go:177] * Verifying ingress addon...
	W0921 21:43:33.925217    6248 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 21:43:33.928146    6248 out.go:177] 
	W0921 21:43:33.930801    6248 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220921214242-5916" does not exist: client config: context "ingress-addon-legacy-20220921214242-5916" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220921214242-5916" does not exist: client config: context "ingress-addon-legacy-20220921214242-5916" does not exist]
	W0921 21:43:33.930801    6248 out.go:239] * 
	* 
	W0921 21:43:34.417747    6248 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_addons_765a40db962dd8139438d8c956b5e6e825316d2d_9.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_addons_765a40db962dd8139438d8c956b5e6e825316d2d_9.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 21:43:34.421104    6248 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220921214242-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect ingress-addon-legacy-20220921214242-5916: exit status 1 (257.6696ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: ingress-addon-legacy-20220921214242-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220921214242-5916 -n ingress-addon-legacy-20220921214242-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220921214242-5916 -n ingress-addon-legacy-20220921214242-5916: exit status 7 (531.0741ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 21:43:35.221735    9132 status.go:247] status error: host: state: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220921214242-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (1.88s)
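
The MK_ADDON_ENABLE exit here is a knock-on effect of the previous failure: the cluster never started, so there is neither a container nor a kubeconfig context for the addon verification to talk to. A quick confirmation sketch, assuming docker and kubectl are on PATH:

	docker ps -a --filter "name=ingress-addon-legacy-20220921214242-5916"   # lists no matching container
	kubectl config get-contexts                                             # shows no context for this profile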

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.84s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:158: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220921214242-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect ingress-addon-legacy-20220921214242-5916: exit status 1 (265.8744ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: ingress-addon-legacy-20220921214242-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220921214242-5916 -n ingress-addon-legacy-20220921214242-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20220921214242-5916 -n ingress-addon-legacy-20220921214242-5916: exit status 7 (559.6625ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 21:43:36.607895    6284 status.go:247] status error: host: state: unknown state "ingress-addon-legacy-20220921214242-5916": docker container inspect ingress-addon-legacy-20220921214242-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20220921214242-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220921214242-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.84s)

TestJSONOutput/start/Command (48.91s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-20220921214338-5916 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-20220921214338-5916 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: exit status 60 (48.9054868s)

-- stdout --
	{"specversion":"1.0","id":"0a96f1f3-1133-4def-ba2e-eaea3b370ddd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-20220921214338-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"90327580-a402-43cb-ac4a-9a67eb01386d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"de02b1a4-e95b-46dc-9b13-286e350aee4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"3ab4f07d-5a24-40b6-8c09-fe0a83fb59f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14995"}}
	{"specversion":"1.0","id":"694af05c-d307-4286-9009-ef9ca292005c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ba6d76e0-7d9f-4e6c-a999-8fbbaae4d701","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e2cd6b44-c51f-4c42-925d-7c2e3699a6ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"a04065dd-9790-4649-b558-1a3e83bd2339","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node json-output-20220921214338-5916 in cluster json-output-20220921214338-5916","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"811e41d3-fac0-4d1c-84e9-2806cc1087ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0ce7167b-51b3-4b5c-b37e-eab1ff458f5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image"}}
	{"specversion":"1.0","id":"014a37ce-0a1f-4284-859b-3ad3401e9abe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9dcd663-f266-4263-8370-b8b7c7e9a6fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220921214338-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=json-output-20220921214338-5916 json-output-20220921214338-5916: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 7932e02ee48efa6e69f766ee259f5fbe37a7e7428bdda9060aa9ecab0ca065ae (br-7932e02ee48e): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4"}
}
	{"specversion":"1.0","id":"fdf0d320-656b-49c9-b10e-5f7b7c087f34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for json-output-20220921214338-5916 container: docker volume create json-output-20220921214338-5916 --label name.minikube.sigs.k8s.io=json-output-20220921214338-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220921214338-5916: error while creating volume root path '/var/lib/docker/volumes/json-output-20220921214338-5916': mkdir /var/lib/docker/volumes/json-output-20220921214338-5916: read-only file system"}}
	{"specversion":"1.0","id":"8aa1ace4-21d6-4fea-ab38-543b84e71409","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"docker \"json-output-20220921214338-5916\" container is missing, will recreate.","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d41bbf0b-9ee5-4425-a648-ac0920ab29ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"716d2e5a-fe4b-4cb0-8600-f10ccd12f770","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220921214338-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=json-output-20220921214338-5916 json-output-20220921214338-5916: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 3dab4a1be42c55e6957a6b879b960352dbef169b7bb827d206fe9e677e87b79a (br-3dab4a1be42c): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4"}
}
	{"specversion":"1.0","id":"29448186-0fd8-4f1b-82e9-f1151c3705c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start docker container. Running \"minikube delete -p json-output-20220921214338-5916\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220921214338-5916 container: docker volume create json-output-20220921214338-5916 --label name.minikube.sigs.k8s.io=json-output-20220921214338-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220921214338-5916: error while creating volume root path '/var/lib/docker/volumes/json-output-20220921214338-5916': mkdir /var/lib/docker/volumes/json-output-20220921214338-5916: read-only file system"}}
	{"specversion":"1.0","id":"c418a031-94a1-4d54-b869-bc345d67d179","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Restart Docker","exitcode":"60","issues":"https://github.com/kubernetes/minikube/issues/6825","message":"Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220921214338-5916 container: docker volume create json-output-20220921214338-5916 --label name.minikube.sigs.k8s.io=json-output-20220921214338-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220921214338-5916: error while creating volume root path '/var/lib/docker/volumes/json-output-20220921214338-5916': mkdir /var/lib/docker/volumes/json-output-20220921214338-5916: read-only file system","name":"PR_DOCKER_READONLY_VOL","url":""}}

-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800msE0921 21:43:44.905803    7644 network_create.go:104] error while trying to create docker network json-output-20220921214338-5916 192.168.49.0/24: create docker network json-output-20220921214338-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=json-output-20220921214338-5916 json-output-20220921214338-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7932e02ee48efa6e69f766ee259f5fbe37a7e7428bdda9060aa9ecab0ca065ae (br-7932e02ee48e): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	E0921 21:44:17.058496    7644 network_create.go:104] error while trying to create docker network json-output-20220921214338-5916 192.168.58.0/24: create docker network json-output-20220921214338-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=json-output-20220921214338-5916 json-output-20220921214338-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 3dab4a1be42c55e6957a6b879b960352dbef169b7bb827d206fe9e677e87b79a (br-3dab4a1be42c): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe start -p json-output-20220921214338-5916 --output=json --user=testUser --memory=2200 --wait=true --driver=docker": exit status 60
--- FAIL: TestJSONOutput/start/Command (48.91s)
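For readers triaging this report: every status line in the -- stdout -- block above is one CloudEvent-style JSON object with specversion/id/source/type/data fields. A minimal, hypothetical Go sketch for decoding such lines follows; the Event type and parseEvents helper are illustrative names, not minikube's own code, and it assumes the data fields are flat strings as in the sample above.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// Event mirrors the JSON lines emitted by `minikube start --output=json`
// as seen in the log above (hypothetical type, illustrative only).
type Event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .warning / .error
	Data        map[string]string `json:"data"` // message, name, currentstep, exitcode, ...
}

func parseEvents(sc *bufio.Scanner) ([]Event, error) {
	var events []Event
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // skip non-JSON noise such as download progress bars
		}
		var ev Event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			return nil, err
		}
		events = append(events, ev)
	}
	return events, sc.Err()
}

func main() {
	events, err := parseEvents(bufio.NewScanner(os.Stdin))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, ev := range events {
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s: %s\n", ev.Data["name"], ev.Data["message"])
		}
	}
}

Filtering on type io.k8s.sigs.minikube.error surfaces the PR_DOCKER_READONLY_VOL event that ends the run above.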

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 8 has already been assigned to another step:
Creating docker container (CPUs=2, Memory=2200MB) ...
Cannot use for:
docker "json-output-20220921214338-5916" container is missing, will recreate.
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0a96f1f3-1133-4def-ba2e-eaea3b370ddd
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-20220921214338-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 90327580-a402-43cb-ac4a-9a67eb01386d
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: de02b1a4-e95b-46dc-9b13-286e350aee4c
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3ab4f07d-5a24-40b6-8c09-fe0a83fb59f4
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=14995"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 694af05c-d307-4286-9009-ef9ca292005c
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ba6d76e0-7d9f-4e6c-a999-8fbbaae4d701
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: e2cd6b44-c51f-4c42-925d-7c2e3699a6ef
datacontenttype: application/json
Data,
{
"message": "Using Docker Desktop driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a04065dd-9790-4649-b558-1a3e83bd2339
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-20220921214338-5916 in cluster json-output-20220921214338-5916",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 811e41d3-fac0-4d1c-84e9-2806cc1087ed
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: 0ce7167b-51b3-4b5c-b37e-eab1ff458f5e
datacontenttype: application/json
Data,
{
"message": "minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 014a37ce-0a1f-4284-859b-3ad3401e9abe
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: c9dcd663-f266-4263-8370-b8b7c7e9a6fb
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220921214338-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=json-output-20220921214338-5916 json-output-20220921214338-5916: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 7932e02ee48efa6e69f766ee259f5fbe37a7e7428bdda9060aa9ecab0ca065ae (br-7932e02ee48e): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: fdf0d320-656b-49c9-b10e-5f7b7c087f34
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for json-output-20220921214338-5916 container: docker volume create json-output-20220921214338-5916 --label name.minikube.sigs.k8s.io=json-output-20220921214338-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220921214338-5916: error while creating volume root path '/var/lib/docker/volumes/json-output-20220921214338-5916': mkdir /var/lib/docker/volumes/json-output-20220921214338-5916: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8aa1ace4-21d6-4fea-ab38-543b84e71409
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "docker \"json-output-20220921214338-5916\" container is missing, will recreate.",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: d41bbf0b-9ee5-4425-a648-ac0920ab29ce
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: 716d2e5a-fe4b-4cb0-8600-f10ccd12f770
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220921214338-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=json-output-20220921214338-5916 json-output-20220921214338-5916: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 3dab4a1be42c55e6957a6b879b960352dbef169b7bb827d206fe9e677e87b79a (br-3dab4a1be42c): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 29448186-0fd8-4f1b-82e9-f1151c3705c2
datacontenttype: application/json
Data,
{
"message": "Failed to start docker container. Running \"minikube delete -p json-output-20220921214338-5916\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220921214338-5916 container: docker volume create json-output-20220921214338-5916 --label name.minikube.sigs.k8s.io=json-output-20220921214338-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220921214338-5916: error while creating volume root path '/var/lib/docker/volumes/json-output-20220921214338-5916': mkdir /var/lib/docker/volumes/json-output-20220921214338-5916: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: c418a031-94a1-4d54-b869-bc345d67d179
datacontenttype: application/json
Data,
{
"advice": "Restart Docker",
"exitcode": "60",
"issues": "https://github.com/kubernetes/minikube/issues/6825",
"message": "Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220921214338-5916 container: docker volume create json-output-20220921214338-5916 --label name.minikube.sigs.k8s.io=json-output-20220921214338-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220921214338-5916: error while creating volume root path '/var/lib/docker/volumes/json-output-20220921214338-5916': mkdir /var/lib/docker/volumes/json-output-20220921214338-5916: read-only file system",
"name": "PR_DOCKER_READONLY_VOL",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
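The failure above says currentstep 8 was reused for two different messages ("Creating docker container ..." and the "container is missing, will recreate." retry). A hedged, standalone sketch of the kind of uniqueness check json_output_test.go:114 appears to perform (hypothetical code, not the real test):

package main

import "fmt"

type step struct {
	CurrentStep string
	Message     string
}

// distinctSteps fails if the same currentstep value is reused for a
// different step message (sketch only).
func distinctSteps(steps []step) error {
	seen := map[string]string{}
	for _, s := range steps {
		if prev, ok := seen[s.CurrentStep]; ok && prev != s.Message {
			return fmt.Errorf("step %s has already been assigned to %q, cannot use for %q",
				s.CurrentStep, prev, s.Message)
		}
		seen[s.CurrentStep] = s.Message
	}
	return nil
}

func main() {
	// The two step-8 events from the log above trip the check.
	steps := []step{
		{"8", "Creating docker container (CPUs=2, Memory=2200MB) ..."},
		{"8", `docker "json-output-20220921214338-5916" container is missing, will recreate.`},
	}
	if err := distinctSteps(steps); err != nil {
		fmt.Println(err)
	}
}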

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.01s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:133: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0a96f1f3-1133-4def-ba2e-eaea3b370ddd
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-20220921214338-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 90327580-a402-43cb-ac4a-9a67eb01386d
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: de02b1a4-e95b-46dc-9b13-286e350aee4c
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3ab4f07d-5a24-40b6-8c09-fe0a83fb59f4
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=14995"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 694af05c-d307-4286-9009-ef9ca292005c
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ba6d76e0-7d9f-4e6c-a999-8fbbaae4d701
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: e2cd6b44-c51f-4c42-925d-7c2e3699a6ef
datacontenttype: application/json
Data,
{
"message": "Using Docker Desktop driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a04065dd-9790-4649-b558-1a3e83bd2339
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-20220921214338-5916 in cluster json-output-20220921214338-5916",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 811e41d3-fac0-4d1c-84e9-2806cc1087ed
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: 0ce7167b-51b3-4b5c-b37e-eab1ff458f5e
datacontenttype: application/json
Data,
{
"message": "minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 014a37ce-0a1f-4284-859b-3ad3401e9abe
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: c9dcd663-f266-4263-8370-b8b7c7e9a6fb
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220921214338-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=json-output-20220921214338-5916 json-output-20220921214338-5916: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 7932e02ee48efa6e69f766ee259f5fbe37a7e7428bdda9060aa9ecab0ca065ae (br-7932e02ee48e): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: fdf0d320-656b-49c9-b10e-5f7b7c087f34
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for json-output-20220921214338-5916 container: docker volume create json-output-20220921214338-5916 --label name.minikube.sigs.k8s.io=json-output-20220921214338-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220921214338-5916: error while creating volume root path '/var/lib/docker/volumes/json-output-20220921214338-5916': mkdir /var/lib/docker/volumes/json-output-20220921214338-5916: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8aa1ace4-21d6-4fea-ab38-543b84e71409
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "docker \"json-output-20220921214338-5916\" container is missing, will recreate.",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: d41bbf0b-9ee5-4425-a648-ac0920ab29ce
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.warning
source: https://minikube.sigs.k8s.io/
id: 716d2e5a-fe4b-4cb0-8600-f10ccd12f770
datacontenttype: application/json
Data,
{
"message": "Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network json-output-20220921214338-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=json-output-20220921214338-5916 json-output-20220921214338-5916: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network 3dab4a1be42c55e6957a6b879b960352dbef169b7bb827d206fe9e677e87b79a (br-3dab4a1be42c): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 29448186-0fd8-4f1b-82e9-f1151c3705c2
datacontenttype: application/json
Data,
{
"message": "Failed to start docker container. Running \"minikube delete -p json-output-20220921214338-5916\" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220921214338-5916 container: docker volume create json-output-20220921214338-5916 --label name.minikube.sigs.k8s.io=json-output-20220921214338-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220921214338-5916: error while creating volume root path '/var/lib/docker/volumes/json-output-20220921214338-5916': mkdir /var/lib/docker/volumes/json-output-20220921214338-5916: read-only file system"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: c418a031-94a1-4d54-b869-bc345d67d179
datacontenttype: application/json
Data,
{
"advice": "Restart Docker",
"exitcode": "60",
"issues": "https://github.com/kubernetes/minikube/issues/6825",
"message": "Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for json-output-20220921214338-5916 container: docker volume create json-output-20220921214338-5916 --label name.minikube.sigs.k8s.io=json-output-20220921214338-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1\nstdout:\n\nstderr:\nError response from daemon: create json-output-20220921214338-5916: error while creating volume root path '/var/lib/docker/volumes/json-output-20220921214338-5916': mkdir /var/lib/docker/volumes/json-output-20220921214338-5916: read-only file system",
"name": "PR_DOCKER_READONLY_VOL",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.01s)
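json_output_test.go:133 complains that the current step is not in increasing order; given the sequence in the dump above (0, 1, 3, 5, 8, 8, 8), the check presumably requires strictly increasing values, so the repeated step 8 from the recreate path is enough to fail it. A hypothetical standalone sketch of such a check:

package main

import (
	"fmt"
	"strconv"
)

// increasing reports an error if the currentstep values are not strictly
// increasing (sketch only; the real check lives in json_output_test.go).
func increasing(currentSteps []string) error {
	prev := -1
	for _, s := range currentSteps {
		n, err := strconv.Atoi(s)
		if err != nil {
			return err
		}
		if n <= prev {
			return fmt.Errorf("current step is not in increasing order: %d after %d", n, prev)
		}
		prev = n
	}
	return nil
}

func main() {
	// Step sequence taken from the event dump above.
	fmt.Println(increasing([]string{"0", "1", "3", "5", "8", "8", "8"}))
}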

                                                
                                    
TestJSONOutput/pause/Command (1.07s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-20220921214338-5916 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p json-output-20220921214338-5916 --output=json --user=testUser: exit status 80 (1.0705636s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0cd26c94-a34f-465b-8953-8308b1dc4b3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"state: unknown state \"json-output-20220921214338-5916\": docker container inspect json-output-20220921214338-5916 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220921214338-5916","name":"GUEST_STATUS","url":""}}
	{"specversion":"1.0","id":"70269626-4bc8-4bf7-a4f2-fdcb5a4be2f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                      │\n│    If the above advice does not help, please let us know:                                                            │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                          │\n│
│\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │\n│    Please also attach the following file to the GitHub issue:                                                        │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_26.log    │\n│                                                                                                                      │\n╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe pause -p json-output-20220921214338-5916 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.07s)
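The pause failure above is a GUEST_STATUS error: the profile's container was never created (the earlier start exited with PR_DOCKER_READONLY_VOL), so `docker container inspect` has nothing to report. A small diagnostic sketch that reproduces the same probe (hypothetical helper, not minikube code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState asks Docker for a container's .State.Status, mapping the
// "No such container" case to a sentinel value (diagnostic sketch only).
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "nonexistent", nil
		}
		return "", fmt.Errorf("docker inspect: %v: %s", err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("json-output-20220921214338-5916")
	fmt.Println(state, err)
}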

                                                
                                    
TestJSONOutput/unpause/Command (1.08s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-20220921214338-5916 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe unpause -p json-output-20220921214338-5916 --output=json --user=testUser: exit status 80 (1.0830339s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "json-output-20220921214338-5916": docker container inspect json-output-20220921214338-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: json-output-20220921214338-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_unpause_00b12d9cedab4ae1bb930a621bdee2ada68dbd98_12.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe unpause -p json-output-20220921214338-5916 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.08s)

                                                
                                    
TestJSONOutput/stop/Command (19.1s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-20220921214338-5916 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p json-output-20220921214338-5916 --output=json --user=testUser: exit status 82 (19.0997619s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f7d3a1c4-f05d-4d09-bd2d-5ab498fc931d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220921214338-5916\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"517a19ff-ade9-4873-a106-f8808c001e6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220921214338-5916\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"26c6abf0-1131-4fa2-8044-4594b9805b0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220921214338-5916\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"b0a337e2-d266-4aa5-ba1c-906f94e4c36d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220921214338-5916\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"3332c7e2-a369-4183-84b9-ac88c499fef8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220921214338-5916\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"6ad45ccd-abc6-412d-b93e-24ca1076e6c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20220921214338-5916\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"254743ab-2810-4efb-b1c9-eb96b54be08a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"82","issues":"","message":"docker container inspect json-output-20220921214338-5916 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220921214338-5916","name":"GUEST_STOP_TIMEOUT","url":""}}
	{"specversion":"1.0","id":"a5550522-1d1d-4e5e-998d-767d5dd9edfd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                      │\n│    If the above advice does not help, please let us know:                                                            │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                          │\n│
│\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │\n│    Please also attach the following file to the GitHub issue:                                                        │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │\n│                                                                                                                      │\n╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:44:33.208718    2412 daemonize_windows.go:38] error terminating scheduled stop for profile json-output-20220921214338-5916: stopping schedule-stop service for profile json-output-20220921214338-5916: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "json-output-20220921214338-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" json-output-20220921214338-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: json-output-20220921214338-5916

                                                
                                                
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe stop -p json-output-20220921214338-5916 --output=json --user=testUser": exit status 82
--- FAIL: TestJSONOutput/stop/Command (19.10s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
json_output_test.go:114: step 0 has already been assigned to another step:
Stopping node "json-output-20220921214338-5916"  ...
Cannot use for:
Stopping node "json-output-20220921214338-5916"  ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: f7d3a1c4-f05d-4d09-bd2d-5ab498fc931d
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220921214338-5916\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 517a19ff-ade9-4873-a106-f8808c001e6a
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220921214338-5916\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 26c6abf0-1131-4fa2-8044-4594b9805b0a
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220921214338-5916\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b0a337e2-d266-4aa5-ba1c-906f94e4c36d
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220921214338-5916\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3332c7e2-a369-4183-84b9-ac88c499fef8
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220921214338-5916\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 6ad45ccd-abc6-412d-b93e-24ca1076e6c5
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220921214338-5916\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 254743ab-2810-4efb-b1c9-eb96b54be08a
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "82",
"issues": "",
"message": "docker container inspect json-output-20220921214338-5916 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220921214338-5916",
"name": "GUEST_STOP_TIMEOUT",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: a5550522-1d1d-4e5e-998d-767d5dd9edfd
datacontenttype: application/json
Data,
{
"message": "╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                      │\n│    If the above advice does not help, please let us know:                                                            │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                          │\n│                                                                                                                      │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │\n│    P
lease also attach the following file to the GitHub issue:                                                        │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │\n│                                                                                                                      │\n╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
json_output_test.go:133: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: f7d3a1c4-f05d-4d09-bd2d-5ab498fc931d
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220921214338-5916\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 517a19ff-ade9-4873-a106-f8808c001e6a
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220921214338-5916\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 26c6abf0-1131-4fa2-8044-4594b9805b0a
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220921214338-5916\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b0a337e2-d266-4aa5-ba1c-906f94e4c36d
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220921214338-5916\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3332c7e2-a369-4183-84b9-ac88c499fef8
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220921214338-5916\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 6ad45ccd-abc6-412d-b93e-24ca1076e6c5
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20220921214338-5916\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 254743ab-2810-4efb-b1c9-eb96b54be08a
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "82",
"issues": "",
"message": "docker container inspect json-output-20220921214338-5916 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20220921214338-5916",
"name": "GUEST_STOP_TIMEOUT",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: a5550522-1d1d-4e5e-998d-767d5dd9edfd
datacontenttype: application/json
Data,
{
"message": "╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                      │\n│    If the above advice does not help, please let us know:                                                            │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                          │\n│                                                                                                                      │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │\n│    P
lease also attach the following file to the GitHub issue:                                                        │\n│    - C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │\n│                                                                                                                      │\n╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (199.33s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220921214451-5916 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220921214451-5916 --network=: (2m53.2294572s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:127: docker-network-20220921214451-5916 network is not listed by [[docker network ls --format {{.Name}}]]: 
-- stdout --
	bridge
	host
	none

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "docker-network-20220921214451-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220921214451-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220921214451-5916: (25.8749411s)
--- FAIL: TestKicCustomNetwork/create_custom_network (199.33s)
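Here the start itself completed (2m53s), but the profile's dedicated Docker network was never created, so only the default bridge/host/none networks are listed. A hypothetical sketch of the same membership check the test makes against `docker network ls --format {{.Name}}`:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkExists reports whether a Docker network with the given name shows
// up in `docker network ls` (diagnostic sketch, not the test's own code).
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		return false, err
	}
	for _, n := range strings.Fields(string(out)) {
		if n == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := networkExists("docker-network-20220921214451-5916")
	fmt.Println(ok, err)
}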

                                                
                                    
TestKicExistingNetwork (0.85s)

                                                
                                                
=== RUN   TestKicExistingNetwork
E0921 21:51:25.679859    5916 network_create.go:104] error while trying to create docker network existing-network 192.168.49.0/24: create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:

                                                
                                                
stderr:
Error response from daemon: cannot create network 2c9afbee55994baa9403174c78002846ac96f9699947d1433fa899fd69030f6c (br-2c9afbee5599): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
kic_custom_network_test.go:78: error creating network: un-retryable: create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:

                                                
                                                
stderr:
Error response from daemon: cannot create network 2c9afbee55994baa9403174c78002846ac96f9699947d1433fa899fd69030f6c (br-2c9afbee5599): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
--- FAIL: TestKicExistingNetwork (0.85s)
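The create fails because 192.168.49.0/24 conflicts with an existing bridge network (br-a04d36bfb3cf) already occupying that range. One way to see which networks claim which subnets, sketched as a hypothetical diagnostic rather than anything the test does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List every Docker network ID, then print the subnets each one claims.
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, id := range strings.Fields(string(ids)) {
		subnets, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue
		}
		fmt.Printf("%s: %s\n", id, strings.TrimSpace(string(subnets)))
	}
}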

                                                
                                    
TestKicCustomSubnet (204.2s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-20220921215125-5916 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-20220921215125-5916 --subnet=192.168.60.0/24: (2m48.4717668s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220921215125-5916 --format "{{(index .IPAM.Config 0).Subnet}}"
kic_custom_network_test.go:133: (dbg) Non-zero exit: docker network inspect custom-subnet-20220921215125-5916 --format "{{(index .IPAM.Config 0).Subnet}}": exit status 1 (224.7772ms)

                                                
                                                
-- stdout --
	

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such network: custom-subnet-20220921215125-5916

                                                
                                                
** /stderr **
kic_custom_network_test.go:135: docker network inspect custom-subnet-20220921215125-5916 --format "{{(index .IPAM.Config 0).Subnet}}" failed: exit status 1

                                                
                                                
-- stdout --
	

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such network: custom-subnet-20220921215125-5916

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "custom-subnet-20220921215125-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-20220921215125-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-20220921215125-5916: (35.4977289s)
--- FAIL: TestKicCustomSubnet (204.20s)

                                                
                                    
TestMinikubeProfile (52.84s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-20220921215450-5916 --driver=docker
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p first-20220921215450-5916 --driver=docker: exit status 60 (48.4002922s)

                                                
                                                
-- stdout --
	* [first-20220921215450-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node first-20220921215450-5916 in cluster first-20220921215450-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "first-20220921215450-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 21:54:57.001975    4388 network_create.go:104] error while trying to create docker network first-20220921215450-5916 192.168.49.0/24: create docker network first-20220921215450-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=first-20220921215450-5916 first-20220921215450-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2d9e5eaa177d0d091b8546a4b86602954f39fd1049c2b655f472e06fe3e44844 (br-2d9e5eaa177d): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network first-20220921215450-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=first-20220921215450-5916 first-20220921215450-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2d9e5eaa177d0d091b8546a4b86602954f39fd1049c2b655f472e06fe3e44844 (br-2d9e5eaa177d): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for first-20220921215450-5916 container: docker volume create first-20220921215450-5916 --label name.minikube.sigs.k8s.io=first-20220921215450-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create first-20220921215450-5916: error while creating volume root path '/var/lib/docker/volumes/first-20220921215450-5916': mkdir /var/lib/docker/volumes/first-20220921215450-5916: read-only file system
	
	E0921 21:55:29.248565    4388 network_create.go:104] error while trying to create docker network first-20220921215450-5916 192.168.58.0/24: create docker network first-20220921215450-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=first-20220921215450-5916 first-20220921215450-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f16fc114583675a8a1579e37d3b0c5b34ba2846e675e542426fcf22cc6f51063 (br-f16fc1145836): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network first-20220921215450-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=first-20220921215450-5916 first-20220921215450-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f16fc114583675a8a1579e37d3b0c5b34ba2846e675e542426fcf22cc6f51063 (br-f16fc1145836): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p first-20220921215450-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for first-20220921215450-5916 container: docker volume create first-20220921215450-5916 --label name.minikube.sigs.k8s.io=first-20220921215450-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create first-20220921215450-5916: error while creating volume root path '/var/lib/docker/volumes/first-20220921215450-5916': mkdir /var/lib/docker/volumes/first-20220921215450-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for first-20220921215450-5916 container: docker volume create first-20220921215450-5916 --label name.minikube.sigs.k8s.io=first-20220921215450-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create first-20220921215450-5916: error while creating volume root path '/var/lib/docker/volumes/first-20220921215450-5916': mkdir /var/lib/docker/volumes/first-20220921215450-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-windows-amd64.exe start -p first-20220921215450-5916 --driver=docker": exit status 60
panic.go:522: *** TestMinikubeProfile FAILED at 2022-09-21 21:55:38.618723 +0000 GMT m=+1546.174652101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect second-20220921215450-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect second-20220921215450-5916: exit status 1 (239.4061ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: second-20220921215450-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p second-20220921215450-5916 -n second-20220921215450-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p second-20220921215450-5916 -n second-20220921215450-5916: exit status 85 (349.2021ms)

                                                
                                                
-- stdout --
	* Profile "second-20220921215450-5916" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-20220921215450-5916"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-20220921215450-5916" host is not running, skipping log retrieval (state="* Profile \"second-20220921215450-5916\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-20220921215450-5916\"")
helpers_test.go:175: Cleaning up "second-20220921215450-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-20220921215450-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-20220921215450-5916: (1.4127653s)
panic.go:522: *** TestMinikubeProfile FAILED at 2022-09-21 21:55:40.6302979 +0000 GMT m=+1548.186213301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect first-20220921215450-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect first-20220921215450-5916: exit status 1 (284.713ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: first-20220921215450-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p first-20220921215450-5916 -n first-20220921215450-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p first-20220921215450-5916 -n first-20220921215450-5916: exit status 7 (549.9693ms)

                                                
                                                
-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 21:55:41.442391    1900 status.go:247] status error: host: state: unknown state "first-20220921215450-5916": docker container inspect first-20220921215450-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: first-20220921215450-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-20220921215450-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "first-20220921215450-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-20220921215450-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-20220921215450-5916: (1.582694s)
--- FAIL: TestMinikubeProfile (52.84s)
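
The post-mortem above follows the same pattern used for every failed profile: "docker inspect <profile>" confirms that no container exists, then "minikube status --format={{.Host}}" reports exit status 85 when the profile was never created and exit status 7 when the host state is Nonexistent. The Go sketch below illustrates that sequence as seen in this log; it is not the actual helpers_test.go code, and the profile name is simply the example from this run.

// postmortem.go - sketch of the docker-inspect / minikube-status post-mortem
// sequence shown above. The binary path and profile name are taken from the
// log; this is an illustration, not the helpers_test.go implementation.
package main

import (
	"fmt"
	"os/exec"
)

// exitCode extracts the process exit status, or -1 for other errors.
func exitCode(err error) int {
	if err == nil {
		return 0
	}
	if ee, ok := err.(*exec.ExitError); ok {
		return ee.ExitCode()
	}
	return -1
}

func main() {
	profile := "second-20220921215450-5916" // example profile from this run

	// Step 1: is there a container for the profile at all?
	out, err := exec.Command("docker", "inspect", profile).CombinedOutput()
	fmt.Printf("docker inspect exit=%d\n%s", exitCode(err), out)

	// Step 2: ask minikube for the host state of the same profile.
	out, err = exec.Command("out/minikube-windows-amd64.exe", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	switch exitCode(err) {
	case 85:
		fmt.Println("profile was never created:", string(out))
	case 7:
		fmt.Println("host state is Nonexistent:", string(out))
	default:
		fmt.Printf("status exit=%d\n%s", exitCode(err), out)
	}
}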

TestMountStart/serial/StartWithMountFirst (49.84s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-20220921215543-5916 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-1-20220921215543-5916 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: exit status 60 (49.0261844s)

-- stdout --
	* [mount-start-1-20220921215543-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting minikube without Kubernetes in cluster mount-start-1-20220921215543-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "mount-start-1-20220921215543-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 21:55:49.624192    4580 network_create.go:104] error while trying to create docker network mount-start-1-20220921215543-5916 192.168.49.0/24: create docker network mount-start-1-20220921215543-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=mount-start-1-20220921215543-5916 mount-start-1-20220921215543-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8258a63d133b171a7978f78ba40f39fb15ea830cd0f993efc95e9d5e4b17657e (br-8258a63d133b): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network mount-start-1-20220921215543-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=mount-start-1-20220921215543-5916 mount-start-1-20220921215543-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 8258a63d133b171a7978f78ba40f39fb15ea830cd0f993efc95e9d5e4b17657e (br-8258a63d133b): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for mount-start-1-20220921215543-5916 container: docker volume create mount-start-1-20220921215543-5916 --label name.minikube.sigs.k8s.io=mount-start-1-20220921215543-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create mount-start-1-20220921215543-5916: error while creating volume root path '/var/lib/docker/volumes/mount-start-1-20220921215543-5916': mkdir /var/lib/docker/volumes/mount-start-1-20220921215543-5916: read-only file system
	
	E0921 21:56:21.767931    4580 network_create.go:104] error while trying to create docker network mount-start-1-20220921215543-5916 192.168.58.0/24: create docker network mount-start-1-20220921215543-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=mount-start-1-20220921215543-5916 mount-start-1-20220921215543-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 31a2b9786f88eebeed2601357f16919ddbd7d348ee59bc4fb8929580c0ca4f88 (br-31a2b9786f88): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network mount-start-1-20220921215543-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=mount-start-1-20220921215543-5916 mount-start-1-20220921215543-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 31a2b9786f88eebeed2601357f16919ddbd7d348ee59bc4fb8929580c0ca4f88 (br-31a2b9786f88): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p mount-start-1-20220921215543-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for mount-start-1-20220921215543-5916 container: docker volume create mount-start-1-20220921215543-5916 --label name.minikube.sigs.k8s.io=mount-start-1-20220921215543-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create mount-start-1-20220921215543-5916: error while creating volume root path '/var/lib/docker/volumes/mount-start-1-20220921215543-5916': mkdir /var/lib/docker/volumes/mount-start-1-20220921215543-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for mount-start-1-20220921215543-5916 container: docker volume create mount-start-1-20220921215543-5916 --label name.minikube.sigs.k8s.io=mount-start-1-20220921215543-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create mount-start-1-20220921215543-5916: error while creating volume root path '/var/lib/docker/volumes/mount-start-1-20220921215543-5916': mkdir /var/lib/docker/volumes/mount-start-1-20220921215543-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p mount-start-1-20220921215543-5916 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker" : exit status 60
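
Both create attempts above fail in the same two ways: the dedicated network cannot be created because 192.168.49.0/24 and then 192.168.58.0/24 overlap bridge networks docker already owns, and the fallback path then dies creating the profile volume on a read-only /var/lib/docker. The overlap can be surfaced up front by listing the subnets of the existing docker networks; the Go sketch below does only that and is an illustration, not minikube's actual subnet picker (the candidate list is just the two subnets tried in this run).

// subnetcheck.go - list existing docker network subnets and flag candidate
// /24s that overlap them, mirroring the "overlapping IPv4" errors above.
// Assumption: the docker CLI is on PATH; candidates come from this log.
package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
)

// dockerSubnets returns the IPAM subnets of every docker network.
func dockerSubnets() ([]*net.IPNet, error) {
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	var nets []*net.IPNet
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue
		}
		for _, s := range strings.Fields(string(out)) {
			if _, ipnet, err := net.ParseCIDR(s); err == nil {
				nets = append(nets, ipnet)
			}
		}
	}
	return nets, nil
}

// overlaps reports whether two CIDR blocks share any addresses.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	existing, err := dockerSubnets()
	if err != nil {
		fmt.Println("docker not reachable:", err)
		return
	}
	// Candidate subnets taken from the log: 192.168.49.0/24, then 192.168.58.0/24.
	for _, c := range []string{"192.168.49.0/24", "192.168.58.0/24"} {
		_, cand, _ := net.ParseCIDR(c)
		for _, ex := range existing {
			if overlaps(cand, ex) {
				fmt.Printf("%s conflicts with existing subnet %s\n", c, ex)
			}
		}
	}
}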
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/StartWithMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-20220921215543-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect mount-start-1-20220921215543-5916: exit status 1 (255.4822ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: mount-start-1-20220921215543-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-1-20220921215543-5916 -n mount-start-1-20220921215543-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-1-20220921215543-5916 -n mount-start-1-20220921215543-5916: exit status 7 (550.4498ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 21:56:32.868753    9104 status.go:247] status error: host: state: unknown state "mount-start-1-20220921215543-5916": docker container inspect mount-start-1-20220921215543-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-1-20220921215543-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-20220921215543-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/StartWithMountFirst (49.84s)
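
The fatal condition here, and in the tests that follow, is PR_DOCKER_READONLY_VOL: "docker volume create" fails because /var/lib/docker inside the Docker Desktop VM has gone read-only, and the suggested remedy is to restart Docker (issue 6825 linked above). A throwaway probe makes that state easy to confirm before starting another run; the sketch below is illustrative only and the probe volume name is arbitrary.

// rovolprobe.go - probe whether the docker daemon can still create volumes.
// A failure mentioning "read-only file system" matches the
// PR_DOCKER_READONLY_VOL exit seen in this report; restarting Docker Desktop
// is the suggested remedy. The probe volume name below is arbitrary.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const probe = "minikube-ro-probe" // arbitrary throwaway volume name

	out, err := exec.Command("docker", "volume", "create", probe).CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "read-only file system") {
			fmt.Println("docker storage is read-only; restart Docker Desktop (see minikube issue 6825)")
		} else {
			fmt.Printf("volume create failed: %v\n%s", err, out)
		}
		return
	}

	// Creation worked, so clean up the probe volume again.
	exec.Command("docker", "volume", "rm", probe).Run()
	fmt.Println("docker volume creation works; storage is writable")
}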

TestMultiNode/serial/FreshStart2Nodes (49.32s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220921215635-5916 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
multinode_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220921215635-5916 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: exit status 60 (48.3986022s)

-- stdout --
	* [multinode-20220921215635-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node multinode-20220921215635-5916 in cluster multinode-20220921215635-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20220921215635-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 21:56:36.123577    2344 out.go:296] Setting OutFile to fd 940 ...
	I0921 21:56:36.175564    2344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:56:36.175564    2344 out.go:309] Setting ErrFile to fd 932...
	I0921 21:56:36.175564    2344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:56:36.205693    2344 out.go:303] Setting JSON to false
	I0921 21:56:36.208691    2344 start.go:115] hostinfo: {"hostname":"minikube2","uptime":3464,"bootTime":1663793932,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 21:56:36.208691    2344 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 21:56:36.213741    2344 out.go:177] * [multinode-20220921215635-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 21:56:36.217741    2344 notify.go:214] Checking for updates...
	I0921 21:56:36.220222    2344 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 21:56:36.222983    2344 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 21:56:36.225900    2344 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 21:56:36.229307    2344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:56:36.235900    2344 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:56:36.519711    2344 docker.go:137] docker version: linux-20.10.17
	I0921 21:56:36.527667    2344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:56:37.032438    2344 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:58 SystemTime:2022-09-21 21:56:36.6773781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:56:37.041429    2344 out.go:177] * Using the docker driver based on user configuration
	I0921 21:56:37.044264    2344 start.go:284] selected driver: docker
	I0921 21:56:37.044264    2344 start.go:808] validating driver "docker" against <nil>
	I0921 21:56:37.044264    2344 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 21:56:37.115508    2344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:56:37.684648    2344 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:58 SystemTime:2022-09-21 21:56:37.3045079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:56:37.684648    2344 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 21:56:37.685952    2344 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 21:56:37.691945    2344 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 21:56:37.693324    2344 cni.go:95] Creating CNI manager for ""
	I0921 21:56:37.693324    2344 cni.go:156] 0 nodes found, recommending kindnet
	I0921 21:56:37.693324    2344 start_flags.go:311] Found "CNI" CNI - setting NetworkPlugin=cni
	I0921 21:56:37.693324    2344 start_flags.go:316] config:
	{Name:multinode-20220921215635-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:multinode-20220921215635-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:56:37.696042    2344 out.go:177] * Starting control plane node multinode-20220921215635-5916 in cluster multinode-20220921215635-5916
	I0921 21:56:37.698838    2344 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 21:56:37.701968    2344 out.go:177] * Pulling base image ...
	I0921 21:56:37.704495    2344 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 21:56:37.704495    2344 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:56:37.704495    2344 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 21:56:37.704495    2344 cache.go:57] Caching tarball of preloaded images
	I0921 21:56:37.705507    2344 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 21:56:37.705719    2344 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 21:56:37.705850    2344 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220921215635-5916\config.json ...
	I0921 21:56:37.706386    2344 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220921215635-5916\config.json: {Name:mk81d405dae96a7e325e4320a9dfe013a60aad6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 21:56:37.917081    2344 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 21:56:37.917081    2344 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:56:37.917081    2344 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:56:37.917081    2344 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 21:56:37.917081    2344 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 21:56:37.917081    2344 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 21:56:37.917081    2344 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 21:56:37.917081    2344 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 21:56:37.917081    2344 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:56:40.143407    2344 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-824376577: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-824376577: read-only file system"}
	I0921 21:56:40.143471    2344 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 21:56:40.143471    2344 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 21:56:40.143471    2344 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 21:56:40.143471    2344 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:56:40.382389    2344 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 21:56:40.382584    2344 image.go:258] Getting image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 21:56:40.639150    2344 image.go:272] Writing image gcr.io/k8s-minikube/kicbase:v0.0.34
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800msI0921 21:56:41.449212    2344 image.go:306] Pulling image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 21:56:41.817519    2344 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 21:56:41.817519    2344 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 21:56:41.817519    2344 cache.go:208] Successfully downloaded all kic artifacts
	I0921 21:56:41.817519    2344 start.go:364] acquiring machines lock for multinode-20220921215635-5916: {Name:mk1da0b6aaf7b0158fd60ed6f72b6dfa2716f3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:56:41.818398    2344 start.go:368] acquired machines lock for "multinode-20220921215635-5916" in 0s
	I0921 21:56:41.818614    2344 start.go:93] Provisioning new machine with config: &{Name:multinode-20220921215635-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:multinode-20220921215635-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 21:56:41.818790    2344 start.go:125] createHost starting for "" (driver="docker")
	I0921 21:56:41.822981    2344 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 21:56:41.823654    2344 start.go:159] libmachine.API.Create for "multinode-20220921215635-5916" (driver="docker")
	I0921 21:56:41.823654    2344 client.go:168] LocalClient.Create starting
	I0921 21:56:41.823654    2344 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 21:56:41.824449    2344 main.go:134] libmachine: Decoding PEM data...
	I0921 21:56:41.824449    2344 main.go:134] libmachine: Parsing certificate...
	I0921 21:56:41.824449    2344 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 21:56:41.824449    2344 main.go:134] libmachine: Decoding PEM data...
	I0921 21:56:41.824985    2344 main.go:134] libmachine: Parsing certificate...
	I0921 21:56:41.832434    2344 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:56:42.035310    2344 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:56:42.042762    2344 network_create.go:272] running [docker network inspect multinode-20220921215635-5916] to gather additional debugging logs...
	I0921 21:56:42.042846    2344 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916
	W0921 21:56:42.239915    2344 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 returned with exit code 1
	I0921 21:56:42.240061    2344 network_create.go:275] error running [docker network inspect multinode-20220921215635-5916]: docker network inspect multinode-20220921215635-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220921215635-5916
	I0921 21:56:42.240104    2344 network_create.go:277] output of [docker network inspect multinode-20220921215635-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220921215635-5916
	
	** /stderr **
	I0921 21:56:42.247094    2344 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 21:56:42.448913    2344 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00068ad08] misses:0}
	I0921 21:56:42.449793    2344 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:56:42.449793    2344 network_create.go:115] attempt to create docker network multinode-20220921215635-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 21:56:42.454229    2344 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916
	W0921 21:56:42.645949    2344 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916 returned with exit code 1
	E0921 21:56:42.646121    2344 network_create.go:104] error while trying to create docker network multinode-20220921215635-5916 192.168.49.0/24: create docker network multinode-20220921215635-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 08ae99a41dfbe9b805f94d5765d8515ff9404bb73ebb4a981785e73b81aed9df (br-08ae99a41dfb): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 21:56:42.646514    2344 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 08ae99a41dfbe9b805f94d5765d8515ff9404bb73ebb4a981785e73b81aed9df (br-08ae99a41dfb): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 08ae99a41dfbe9b805f94d5765d8515ff9404bb73ebb4a981785e73b81aed9df (br-08ae99a41dfb): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 21:56:42.660003    2344 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 21:56:42.902490    2344 cli_runner.go:164] Run: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 21:56:43.097681    2344 cli_runner.go:211] docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 21:56:43.097823    2344 client.go:171] LocalClient.Create took 1.2741598s
	I0921 21:56:45.126027    2344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:56:45.131958    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:56:45.312554    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:56:45.312930    2344 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:45.605221    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:56:45.802687    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:56:45.802687    2344 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:46.356515    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:56:46.550630    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 21:56:46.550936    2344 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:56:46.550936    2344 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:46.561300    2344 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:56:46.568477    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:56:46.753448    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:56:46.753448    2344 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:47.003295    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:56:47.216538    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:56:47.216538    2344 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:47.580593    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:56:47.760342    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:56:47.760342    2344 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:48.439147    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:56:48.635891    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 21:56:48.635891    2344 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:56:48.635891    2344 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:48.635891    2344 start.go:128] duration metric: createHost completed in 6.8170535s
	I0921 21:56:48.635891    2344 start.go:83] releasing machines lock for "multinode-20220921215635-5916", held for 6.8174453s
	W0921 21:56:48.636475    2344 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	I0921 21:56:48.650173    2344 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:56:48.837234    2344 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:56:48.837234    2344 delete.go:82] Unable to get host status for multinode-20220921215635-5916, assuming it has already been deleted: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	W0921 21:56:48.837234    2344 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	I0921 21:56:48.837234    2344 start.go:617] Will try again in 5 seconds ...
	I0921 21:56:53.842233    2344 start.go:364] acquiring machines lock for multinode-20220921215635-5916: {Name:mk1da0b6aaf7b0158fd60ed6f72b6dfa2716f3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:56:53.842233    2344 start.go:368] acquired machines lock for "multinode-20220921215635-5916" in 0s
	I0921 21:56:53.842814    2344 start.go:96] Skipping create...Using existing machine configuration
	I0921 21:56:53.842814    2344 fix.go:55] fixHost starting: 
	I0921 21:56:53.864648    2344 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:56:54.048713    2344 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:56:54.048713    2344 fix.go:103] recreateIfNeeded on multinode-20220921215635-5916: state= err=unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:54.048713    2344 fix.go:108] machineExists: false. err=machine does not exist
	I0921 21:56:54.052713    2344 out.go:177] * docker "multinode-20220921215635-5916" container is missing, will recreate.
	I0921 21:56:54.054725    2344 delete.go:124] DEMOLISHING multinode-20220921215635-5916 ...
	I0921 21:56:54.069582    2344 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:56:54.252472    2344 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:56:54.252472    2344 stop.go:75] unable to get state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:54.252472    2344 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:54.265470    2344 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:56:54.456021    2344 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:56:54.456021    2344 delete.go:82] Unable to get host status for multinode-20220921215635-5916, assuming it has already been deleted: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:54.466496    2344 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220921215635-5916
	W0921 21:56:54.655873    2344 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220921215635-5916 returned with exit code 1
	I0921 21:56:54.655999    2344 kic.go:356] could not find the container multinode-20220921215635-5916 to remove it. will try anyways
	I0921 21:56:54.663221    2344 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:56:54.841644    2344 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:56:54.841865    2344 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:54.850059    2344 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0"
	W0921 21:56:55.042219    2344 cli_runner.go:211] docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 21:56:55.042307    2344 oci.go:646] error shutdown multinode-20220921215635-5916: docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:56.059417    2344 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:56:56.265855    2344 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:56:56.266217    2344 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:56.266285    2344 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:56:56.266385    2344 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:56.617547    2344 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:56:56.811399    2344 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:56:56.811399    2344 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:56.811399    2344 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:56:56.811399    2344 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:57.275099    2344 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:56:57.501740    2344 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:56:57.502086    2344 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:57.502086    2344 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:56:57.502086    2344 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:58.425439    2344 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:56:58.605549    2344 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:56:58.605745    2344 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:56:58.605745    2344 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:56:58.605745    2344 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:00.327040    2344 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:57:00.537961    2344 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:57:00.537961    2344 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:00.537961    2344 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:57:00.537961    2344 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:03.885266    2344 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:57:04.096477    2344 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:57:04.096786    2344 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:04.096786    2344 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:57:04.096786    2344 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:06.822387    2344 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:57:07.015822    2344 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:57:07.015822    2344 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:07.015822    2344 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:57:07.015822    2344 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:12.053286    2344 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:57:12.231882    2344 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:57:12.231882    2344 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:12.231882    2344 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:57:12.231882    2344 oci.go:88] couldn't shut down multinode-20220921215635-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	 
	I0921 21:57:12.240931    2344 cli_runner.go:164] Run: docker rm -f -v multinode-20220921215635-5916
	I0921 21:57:12.443266    2344 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220921215635-5916
	W0921 21:57:12.637845    2344 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220921215635-5916 returned with exit code 1
	I0921 21:57:12.644969    2344 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:57:12.824171    2344 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:57:12.831181    2344 network_create.go:272] running [docker network inspect multinode-20220921215635-5916] to gather additional debugging logs...
	I0921 21:57:12.831181    2344 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916
	W0921 21:57:13.027764    2344 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 returned with exit code 1
	I0921 21:57:13.027764    2344 network_create.go:275] error running [docker network inspect multinode-20220921215635-5916]: docker network inspect multinode-20220921215635-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220921215635-5916
	I0921 21:57:13.027764    2344 network_create.go:277] output of [docker network inspect multinode-20220921215635-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220921215635-5916
	
	** /stderr **
	W0921 21:57:13.028757    2344 delete.go:139] delete failed (probably ok) <nil>
	I0921 21:57:13.028757    2344 fix.go:115] Sleeping 1 second for extra luck!
	I0921 21:57:14.041111    2344 start.go:125] createHost starting for "" (driver="docker")
	I0921 21:57:14.045660    2344 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 21:57:14.045914    2344 start.go:159] libmachine.API.Create for "multinode-20220921215635-5916" (driver="docker")
	I0921 21:57:14.045983    2344 client.go:168] LocalClient.Create starting
	I0921 21:57:14.046151    2344 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 21:57:14.046151    2344 main.go:134] libmachine: Decoding PEM data...
	I0921 21:57:14.046151    2344 main.go:134] libmachine: Parsing certificate...
	I0921 21:57:14.046821    2344 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 21:57:14.046858    2344 main.go:134] libmachine: Decoding PEM data...
	I0921 21:57:14.046858    2344 main.go:134] libmachine: Parsing certificate...
	I0921 21:57:14.055221    2344 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:57:14.258670    2344 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:57:14.266038    2344 network_create.go:272] running [docker network inspect multinode-20220921215635-5916] to gather additional debugging logs...
	I0921 21:57:14.266038    2344 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916
	W0921 21:57:14.461686    2344 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 returned with exit code 1
	I0921 21:57:14.461686    2344 network_create.go:275] error running [docker network inspect multinode-20220921215635-5916]: docker network inspect multinode-20220921215635-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220921215635-5916
	I0921 21:57:14.461686    2344 network_create.go:277] output of [docker network inspect multinode-20220921215635-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220921215635-5916
	
	** /stderr **
	I0921 21:57:14.469581    2344 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 21:57:14.665006    2344 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068ad08] amended:false}} dirty:map[] misses:0}
	I0921 21:57:14.665006    2344 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:57:14.679005    2344 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068ad08] amended:true}} dirty:map[192.168.49.0:0xc00068ad08 192.168.58.0:0xc00012a710] misses:0}
	I0921 21:57:14.679005    2344 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:57:14.679897    2344 network_create.go:115] attempt to create docker network multinode-20220921215635-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 21:57:14.686844    2344 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916
	W0921 21:57:14.906955    2344 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916 returned with exit code 1
	E0921 21:57:14.906955    2344 network_create.go:104] error while trying to create docker network multinode-20220921215635-5916 192.168.58.0/24: create docker network multinode-20220921215635-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b49cdda0d5874b2f2b19ed36a56dcad13a60daa803d3684e349b1832d2855c07 (br-b49cdda0d587): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 21:57:14.906955    2344 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b49cdda0d5874b2f2b19ed36a56dcad13a60daa803d3684e349b1832d2855c07 (br-b49cdda0d587): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b49cdda0d5874b2f2b19ed36a56dcad13a60daa803d3684e349b1832d2855c07 (br-b49cdda0d587): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 21:57:14.920528    2344 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 21:57:15.106374    2344 cli_runner.go:164] Run: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 21:57:15.314786    2344 cli_runner.go:211] docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 21:57:15.315045    2344 client.go:171] LocalClient.Create took 1.2690529s
	I0921 21:57:17.338458    2344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:57:17.345601    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:17.530440    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:57:17.530440    2344 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:17.795765    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:17.988865    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:57:17.988929    2344 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:18.298865    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:18.479430    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:57:18.479746    2344 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:18.934388    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:19.134654    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 21:57:19.134654    2344 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:57:19.134654    2344 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:19.145662    2344 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:57:19.156650    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:19.350805    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:57:19.350805    2344 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:19.550910    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:19.746607    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:57:19.746806    2344 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:20.032010    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:20.242002    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:57:20.242002    2344 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:20.749424    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:20.956871    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 21:57:20.957258    2344 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:57:20.957373    2344 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:20.957420    2344 start.go:128] duration metric: createHost completed in 6.9161048s
	I0921 21:57:20.969245    2344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:57:20.974721    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:21.159079    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:57:21.159079    2344 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:21.517432    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:21.727551    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:57:21.727551    2344 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:22.042952    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:22.219974    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:57:22.220348    2344 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:22.684818    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:22.892886    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 21:57:22.892886    2344 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:57:22.892886    2344 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:22.904886    2344 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:57:22.911583    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:23.127590    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:57:23.127784    2344 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:23.325044    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:23.517598    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:57:23.517598    2344 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:24.050380    2344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:57:24.244578    2344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 21:57:24.244578    2344 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:57:24.244578    2344 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:57:24.244578    2344 fix.go:57] fixHost completed within 30.4015511s
	I0921 21:57:24.244578    2344 start.go:83] releasing machines lock for "multinode-20220921215635-5916", held for 30.402132s
	W0921 21:57:24.245604    2344 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-20220921215635-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220921215635-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	I0921 21:57:24.249995    2344 out.go:177] 
	W0921 21:57:24.252085    2344 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	W0921 21:57:24.252085    2344 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 21:57:24.253070    2344 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 21:57:24.256072    2344 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:85: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-20220921215635-5916 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220921215635-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220921215635-5916: exit status 1 (257.9883ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916: exit status 7 (544.6417ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:57:25.188662    7024 status.go:247] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220921215635-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (49.32s)
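The root cause surfaced in the log above is the Docker daemon refusing to create the profile volume ("read-only file system"), after the dedicated network could not be created due to an overlapping subnet. The standalone Go sketch below (not minikube code; the profile name and labels are copied from the log, and a local docker CLI on PATH is assumed) replays the same "docker volume create" invocation so the daemon error can be inspected outside the test harness:

	// volume_probe.go - minimal sketch, assuming a local docker CLI on PATH.
	// It replays the "docker volume create" call that failed above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		name := "multinode-20220921215635-5916" // profile name taken from the log
		cmd := exec.Command("docker", "volume", "create", name,
			"--label", "name.minikube.sigs.k8s.io="+name,
			"--label", "created_by.minikube.sigs.k8s.io=true")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// On this run the daemon answered "mkdir /var/lib/docker/volumes/...:
			// read-only file system", which is why the output suggests restarting Docker.
			fmt.Println("volume create failed:", err)
		}
	}
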

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (490.2327ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-20220921215635-5916" does not exist

                                                
                                                
** /stderr **
multinode_test.go:481: failed to create busybox deployment to multinode cluster
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- rollout status deployment/busybox: exit status 1 (491.1238ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-20220921215635-5916"

                                                
                                                
** /stderr **
multinode_test.go:486: failed to deploy busybox to multinode cluster
multinode_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:490: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (502.4163ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-20220921215635-5916"

                                                
                                                
** /stderr **
multinode_test.go:492: failed to retrieve Pod IPs
multinode_test.go:496: expected 2 Pod IPs but got 1
multinode_test.go:502: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:502: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (486.5481ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-20220921215635-5916"

                                                
                                                
** /stderr **
multinode_test.go:504: failed get Pod names
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- exec  -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- exec  -- nslookup kubernetes.io: exit status 1 (491.445ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-20220921215635-5916"

                                                
                                                
** /stderr **
multinode_test.go:512: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- exec  -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- exec  -- nslookup kubernetes.default: exit status 1 (540.0809ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-20220921215635-5916"

                                                
                                                
** /stderr **
multinode_test.go:522: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (502.7786ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-20220921215635-5916"

                                                
                                                
** /stderr **
multinode_test.go:530: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220921215635-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220921215635-5916: exit status 1 (236.0142ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916: exit status 7 (580.0483ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:57:29.514432    6968 status.go:247] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220921215635-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (4.33s)
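Every kubectl call in this test fails with 'cluster "multinode-20220921215635-5916" does not exist' or 'no server found for cluster', i.e. the kubeconfig never received a cluster entry because the container was never created. One way to confirm that is sketched below using k8s.io/client-go; the kubeconfig path is an assumed Windows default and the snippet is a diagnostic aid, not part of the test suite:

	// kubeconfig_probe.go - diagnostic sketch; checks whether the profile's
	// cluster entry exists in the local kubeconfig.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		profile := "multinode-20220921215635-5916" // cluster name from the failing kubectl calls
		// Assumed default kubeconfig location on Windows.
		kubeconfig := filepath.Join(os.Getenv("USERPROFILE"), ".kube", "config")

		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		cluster, ok := cfg.Clusters[profile]
		if !ok {
			// Matches the errors above: no cluster entry was ever written.
			fmt.Printf("cluster %q not present in %s\n", profile, kubeconfig)
			return
		}
		fmt.Printf("cluster %q found, server %s\n", profile, cluster.Server)
	}
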

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:538: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20220921215635-5916 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (517.3773ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-20220921215635-5916"

                                                
                                                
** /stderr **
multinode_test.go:540: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220921215635-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220921215635-5916: exit status 1 (274.941ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916: exit status 7 (552.048ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:57:30.867930    8728 status.go:247] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220921215635-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (1.35s)

                                                
                                    
TestMultiNode/serial/AddNode (1.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220921215635-5916 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220921215635-5916 -v 3 --alsologtostderr: exit status 80 (1.084794s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:57:31.146070    1152 out.go:296] Setting OutFile to fd 704 ...
	I0921 21:57:31.217628    1152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:57:31.217628    1152 out.go:309] Setting ErrFile to fd 880...
	I0921 21:57:31.217628    1152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:57:31.230274    1152 mustload.go:65] Loading cluster: multinode-20220921215635-5916
	I0921 21:57:31.230955    1152 config.go:180] Loaded profile config "multinode-20220921215635-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:57:31.244874    1152 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:57:31.466987    1152 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:57:31.472696    1152 out.go:177] 
	W0921 21:57:31.475562    1152 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:57:31.475562    1152 out.go:239] * 
	* 
	W0921 21:57:31.945763    1152 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_30.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_30.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 21:57:31.949048    1152 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:110: failed to add node to current cluster. args "out/minikube-windows-amd64.exe node add -p multinode-20220921215635-5916 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220921215635-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220921215635-5916: exit status 1 (241.3037ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916: exit status 7 (564.5168ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:57:32.768827    1144 status.go:247] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220921215635-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (1.90s)
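Note: every status error above comes from the same probe, `docker container inspect <name> --format={{.State.Status}}`, which exits 1 with "No such container" once the container is gone. A minimal stand-alone sketch of that kind of check (plain os/exec; not minikube's actual cli_runner/status code, and the "Nonexistent" mapping is an assumption drawn from the output above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState asks Docker for a container's state and maps the
// "No such container" error to a Nonexistent result, mirroring the
// behaviour visible in the log above.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent", nil // container was never created or was deleted
		}
		return "", fmt.Errorf("unknown state %q: %v: %s", name, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("multinode-20220921215635-5916")
	fmt.Println(state, err)
}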

                                                
                                    
TestMultiNode/serial/ProfileList (1.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:153: expected profile "multinode-20220921215635-5916" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-20220921215635-5916\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-20220921215635-5916\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriver
Mounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.25.2\",\"ClusterName\":\"multinode-20220921215635-5916\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.25.2\",\"Contain
erRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube2:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\"},\"Active\":false}]}"*. arg
s: "out/minikube-windows-amd64.exe profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220921215635-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220921215635-5916: exit status 1 (240.9638ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916: exit status 7 (563.3056ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:57:34.349226    3732 status.go:247] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220921215635-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (1.58s)
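Note: the node-count assertion above decodes the `minikube profile list --output json` payload (the {"invalid":[...],"valid":[...]} blob shown in the failure message) and counts Config.Nodes for the profile. A sketch of that check with stand-in types trimmed to the fields it needs; the real test uses minikube's config structs:

package main

import (
	"encoding/json"
	"fmt"
)

// profileList models only the parts of `profile list --output json`
// that the node-count check reads.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	}
}

func nodeCount(raw []byte, profile string) (int, error) {
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		return 0, err
	}
	for _, p := range pl.Valid {
		if p.Name == profile {
			return len(p.Config.Nodes), nil // the log above shows 1 node where 3 were expected
		}
	}
	return 0, fmt.Errorf("profile %q not found", profile)
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"p","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
	n, err := nodeCount(raw, "p")
	fmt.Println(n, err) // 1 <nil>
}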

                                                
                                    
TestMultiNode/serial/CopyFile (1.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --output json --alsologtostderr: exit status 7 (563.1099ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-20220921215635-5916","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:57:34.613608    2740 out.go:296] Setting OutFile to fd 624 ...
	I0921 21:57:34.671849    2740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:57:34.671849    2740 out.go:309] Setting ErrFile to fd 744...
	I0921 21:57:34.671849    2740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:57:34.687533    2740 out.go:303] Setting JSON to true
	I0921 21:57:34.687533    2740 mustload.go:65] Loading cluster: multinode-20220921215635-5916
	I0921 21:57:34.688288    2740 config.go:180] Loaded profile config "multinode-20220921215635-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:57:34.688288    2740 status.go:253] checking status of multinode-20220921215635-5916 ...
	I0921 21:57:34.700721    2740 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:57:34.911487    2740 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:57:34.911487    2740 status.go:328] multinode-20220921215635-5916 host status = "" (err=state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	)
	I0921 21:57:34.911487    2740 status.go:255] multinode-20220921215635-5916 status: &{Name:multinode-20220921215635-5916 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0921 21:57:34.911487    2740 status.go:258] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	E0921 21:57:34.911487    2740 status.go:261] The "multinode-20220921215635-5916" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:178: failed to decode json from status: args "out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220921215635-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220921215635-5916: exit status 1 (266.5623ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916: exit status 7 (597.7266ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:57:35.787057    7220 status.go:247] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220921215635-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (1.44s)
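Note: the "cannot unmarshal object into Go value of type []cmd.Status" failure above looks like a JSON shape mismatch: with only one node present, `status --output json` printed a single object, while the test decodes into a slice. A minimal reproduction with a stand-in Status type (not minikube's cmd.Status):

package main

import (
	"encoding/json"
	"fmt"
)

// Status is a stand-in for the fields printed by `status --output json`.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	single := []byte(`{"Name":"m","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}`)

	// Decoding a single JSON object into a slice fails, as in the test above.
	var many []Status
	fmt.Println(json.Unmarshal(single, &many)) // json: cannot unmarshal object into Go value of type []main.Status

	// Decoding the same object into a struct succeeds.
	var one Status
	fmt.Println(json.Unmarshal(single, &one)) // <nil>
}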

                                                
                                    
TestMultiNode/serial/StopNode (2.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 node stop m03
multinode_test.go:208: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 node stop m03: exit status 85 (841.2682ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_a721422985a44b3996d93fcfe1a29c6759a29372_3.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:210: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 node stop m03": exit status 85
multinode_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status: exit status 7 (571.7445ms)

                                                
                                                
-- stdout --
	multinode-20220921215635-5916
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:57:37.205103    4580 status.go:258] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	E0921 21:57:37.205103    4580 status.go:261] The "multinode-20220921215635-5916" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --alsologtostderr: exit status 7 (560.975ms)

                                                
                                                
-- stdout --
	multinode-20220921215635-5916
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:57:37.492072    5424 out.go:296] Setting OutFile to fd 588 ...
	I0921 21:57:37.548599    5424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:57:37.548599    5424 out.go:309] Setting ErrFile to fd 868...
	I0921 21:57:37.548599    5424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:57:37.558105    5424 out.go:303] Setting JSON to false
	I0921 21:57:37.558105    5424 mustload.go:65] Loading cluster: multinode-20220921215635-5916
	I0921 21:57:37.559201    5424 config.go:180] Loaded profile config "multinode-20220921215635-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:57:37.559201    5424 status.go:253] checking status of multinode-20220921215635-5916 ...
	I0921 21:57:37.572374    5424 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:57:37.766109    5424 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:57:37.766109    5424 status.go:328] multinode-20220921215635-5916 host status = "" (err=state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	)
	I0921 21:57:37.766109    5424 status.go:255] multinode-20220921215635-5916 status: &{Name:multinode-20220921215635-5916 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0921 21:57:37.766109    5424 status.go:258] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	E0921 21:57:37.766109    5424 status.go:261] The "multinode-20220921215635-5916" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:227: incorrect number of running kubelets: args "out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --alsologtostderr": multinode-20220921215635-5916
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:231: incorrect number of stopped hosts: args "out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --alsologtostderr": multinode-20220921215635-5916
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:235: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --alsologtostderr": multinode-20220921215635-5916
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220921215635-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220921215635-5916: exit status 1 (238.3638ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916: exit status 7 (533.7923ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:57:38.546856    9024 status.go:247] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220921215635-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (2.76s)
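Note: the three "incorrect number of ..." assertions above appear to compare line counts in the plain-text status output against what a three-node cluster with one stopped node should report. A sketch of that kind of count against the output shown in the log (the real checks live in multinode_test.go and may differ in detail):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output captured in the log above: a single control-plane
	// entry whose host/kubelet state is Nonexistent.
	out := `multinode-20220921215635-5916
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent
`
	// All counts are zero here, which is why the assertions fire: the test
	// expects running and stopped entries from a multi-node cluster.
	fmt.Println("running kubelets:", strings.Count(out, "kubelet: Running"))
	fmt.Println("stopped hosts:   ", strings.Count(out, "host: Stopped"))
	fmt.Println("stopped kubelets:", strings.Count(out, "kubelet: Stopped"))
}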

                                                
                                    
TestMultiNode/serial/StartAfterStop (2.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 node start m03 --alsologtostderr: exit status 85 (837.0474ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:57:39.118415    6788 out.go:296] Setting OutFile to fd 928 ...
	I0921 21:57:39.191173    6788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:57:39.191279    6788 out.go:309] Setting ErrFile to fd 704...
	I0921 21:57:39.191279    6788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:57:39.203303    6788 mustload.go:65] Loading cluster: multinode-20220921215635-5916
	I0921 21:57:39.203947    6788 config.go:180] Loaded profile config "multinode-20220921215635-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:57:39.208258    6788 out.go:177] 
	W0921 21:57:39.210683    6788 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	W0921 21:57:39.210683    6788 out.go:239] * 
	* 
	W0921 21:57:39.681058    6788 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_20.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_20.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 21:57:39.684058    6788 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:254: I0921 21:57:39.118415    6788 out.go:296] Setting OutFile to fd 928 ...
I0921 21:57:39.191173    6788 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0921 21:57:39.191279    6788 out.go:309] Setting ErrFile to fd 704...
I0921 21:57:39.191279    6788 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0921 21:57:39.203303    6788 mustload.go:65] Loading cluster: multinode-20220921215635-5916
I0921 21:57:39.203947    6788 config.go:180] Loaded profile config "multinode-20220921215635-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 21:57:39.208258    6788 out.go:177] 
W0921 21:57:39.210683    6788 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
W0921 21:57:39.210683    6788 out.go:239] * 
* 
W0921 21:57:39.681058    6788 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                       │
│    * If the above advice does not help, please let us know:                                                           │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
│                                                                                                                       │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
│    * Please also attach the following file to the GitHub issue:                                                       │
│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_20.log    │
│                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                       │
│    * If the above advice does not help, please let us know:                                                           │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
│                                                                                                                       │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
│    * Please also attach the following file to the GitHub issue:                                                       │
│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_20.log    │
│                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0921 21:57:39.684058    6788 out.go:177] 
multinode_test.go:255: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 node start m03 --alsologtostderr": exit status 85
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status
multinode_test.go:259: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status: exit status 7 (551.4501ms)

                                                
                                                
-- stdout --
	multinode-20220921215635-5916
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:57:40.233404    6052 status.go:258] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	E0921 21:57:40.233524    6052 status.go:261] The "multinode-20220921215635-5916" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:261: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220921215635-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220921215635-5916: exit status 1 (239.7746ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916: exit status 7 (546.6156ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:57:41.030380    8132 status.go:247] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220921215635-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (2.48s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (96.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220921215635-5916
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20220921215635-5916
multinode_test.go:288: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p multinode-20220921215635-5916: exit status 82 (19.2159422s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-20220921215635-5916"  ...
	* Stopping node "multinode-20220921215635-5916"  ...
	* Stopping node "multinode-20220921215635-5916"  ...
	* Stopping node "multinode-20220921215635-5916"  ...
	* Stopping node "multinode-20220921215635-5916"  ...
	* Stopping node "multinode-20220921215635-5916"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:57:45.315242    6060 daemonize_windows.go:38] error terminating scheduled stop for profile multinode-20220921215635-5916: stopping schedule-stop service for profile multinode-20220921215635-5916: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect multinode-20220921215635-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:290: failed to run minikube stop. args "out/minikube-windows-amd64.exe node list -p multinode-20220921215635-5916" : exit status 82
multinode_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220921215635-5916 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220921215635-5916 --wait=true -v=8 --alsologtostderr: exit status 60 (1m15.426332s)

                                                
                                                
-- stdout --
	* [multinode-20220921215635-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20220921215635-5916 in cluster multinode-20220921215635-5916
	* Pulling base image ...
	* docker "multinode-20220921215635-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20220921215635-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:58:00.865156    2676 out.go:296] Setting OutFile to fd 948 ...
	I0921 21:58:00.923871    2676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:58:00.923871    2676 out.go:309] Setting ErrFile to fd 944...
	I0921 21:58:00.923871    2676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:58:00.941344    2676 out.go:303] Setting JSON to false
	I0921 21:58:00.943858    2676 start.go:115] hostinfo: {"hostname":"minikube2","uptime":3549,"bootTime":1663793931,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 21:58:00.943996    2676 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 21:58:00.949250    2676 out.go:177] * [multinode-20220921215635-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 21:58:00.951090    2676 notify.go:214] Checking for updates...
	I0921 21:58:00.953475    2676 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 21:58:00.957493    2676 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 21:58:00.960340    2676 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 21:58:00.962833    2676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:58:00.967803    2676 config.go:180] Loaded profile config "multinode-20220921215635-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:58:00.968400    2676 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:58:01.262156    2676 docker.go:137] docker version: linux-20.10.17
	I0921 21:58:01.270529    2676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:58:01.810510    2676 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:59 SystemTime:2022-09-21 21:58:01.4167442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:58:01.814810    2676 out.go:177] * Using the docker driver based on existing profile
	I0921 21:58:01.816953    2676 start.go:284] selected driver: docker
	I0921 21:58:01.816953    2676 start.go:808] validating driver "docker" against &{Name:multinode-20220921215635-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:multinode-20220921215635-5916 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:58:01.816953    2676 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 21:58:01.831391    2676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:58:02.343448    2676 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:59 SystemTime:2022-09-21 21:58:01.9849599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:58:02.405727    2676 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 21:58:02.405850    2676 cni.go:95] Creating CNI manager for ""
	I0921 21:58:02.405850    2676 cni.go:156] 1 nodes found, recommending kindnet
	I0921 21:58:02.405850    2676 start_flags.go:316] config:
	{Name:multinode-20220921215635-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:multinode-20220921215635-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPa
th:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:58:02.410642    2676 out.go:177] * Starting control plane node multinode-20220921215635-5916 in cluster multinode-20220921215635-5916
	I0921 21:58:02.412788    2676 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 21:58:02.414845    2676 out.go:177] * Pulling base image ...
	I0921 21:58:02.417967    2676 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 21:58:02.418941    2676 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:58:02.418941    2676 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 21:58:02.418941    2676 cache.go:57] Caching tarball of preloaded images
	I0921 21:58:02.418941    2676 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 21:58:02.418941    2676 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 21:58:02.418941    2676 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220921215635-5916\config.json ...
	I0921 21:58:02.638974    2676 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 21:58:02.638974    2676 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:58:02.638974    2676 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:58:02.638974    2676 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 21:58:02.639510    2676 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 21:58:02.639510    2676 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 21:58:02.639639    2676 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 21:58:02.639639    2676 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 21:58:02.639750    2676 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:58:04.865125    2676 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-1500409646: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-1500409646: read-only file system"}
	I0921 21:58:04.865125    2676 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 21:58:04.865125    2676 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 21:58:04.865125    2676 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 21:58:04.865866    2676 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:58:05.086527    2676 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 21:58:05.086906    2676 image.go:258] Getting image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 21:58:05.329360    2676 image.go:272] Writing image gcr.io/k8s-minikube/kicbase:v0.0.34
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ? (progress bar redrawn several times)
	I0921 21:58:06.075415    2676 image.go:306] Pulling image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 21:58:06.420337    2676 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 21:58:06.420337    2676 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 21:58:06.420337    2676 cache.go:208] Successfully downloaded all kic artifacts
	I0921 21:58:06.420337    2676 start.go:364] acquiring machines lock for multinode-20220921215635-5916: {Name:mk1da0b6aaf7b0158fd60ed6f72b6dfa2716f3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:58:06.420337    2676 start.go:368] acquired machines lock for "multinode-20220921215635-5916" in 0s
	I0921 21:58:06.421003    2676 start.go:96] Skipping create...Using existing machine configuration
	I0921 21:58:06.421093    2676 fix.go:55] fixHost starting: 
	I0921 21:58:06.436243    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:06.652395    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:06.652726    2676 fix.go:103] recreateIfNeeded on multinode-20220921215635-5916: state= err=unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:06.652726    2676 fix.go:108] machineExists: false. err=machine does not exist
	I0921 21:58:06.656362    2676 out.go:177] * docker "multinode-20220921215635-5916" container is missing, will recreate.
	I0921 21:58:06.659803    2676 delete.go:124] DEMOLISHING multinode-20220921215635-5916 ...
	I0921 21:58:06.671920    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:06.870538    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:58:06.870662    2676 stop.go:75] unable to get state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:06.870662    2676 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:06.885978    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:07.078214    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:07.078313    2676 delete.go:82] Unable to get host status for multinode-20220921215635-5916, assuming it has already been deleted: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:07.085608    2676 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220921215635-5916
	W0921 21:58:07.295708    2676 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:07.295819    2676 kic.go:356] could not find the container multinode-20220921215635-5916 to remove it. will try anyways
	I0921 21:58:07.302671    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:07.484189    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:58:07.484310    2676 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:07.492068    2676 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0"
	W0921 21:58:07.669807    2676 cli_runner.go:211] docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 21:58:07.669901    2676 oci.go:646] error shutdown multinode-20220921215635-5916: docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:08.686439    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:08.878076    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:08.878076    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:08.878076    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:08.878076    2676 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:09.440531    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:09.617267    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:09.617487    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:09.617487    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:09.617487    2676 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:10.709283    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:10.918438    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:10.918438    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:10.918438    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:10.918438    2676 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:12.242192    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:12.451519    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:12.451593    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:12.451593    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:12.451593    2676 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:14.046812    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:14.254660    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:14.254999    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:14.255046    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:14.255135    2676 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:16.623220    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:16.821750    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:16.821750    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:16.821750    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:16.821750    2676 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:21.349856    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:21.573407    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:21.573407    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:21.573407    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:21.573407    2676 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:24.817930    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:25.004995    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:25.004995    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:25.004995    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:25.004995    2676 oci.go:88] couldn't shut down multinode-20220921215635-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	 
	I0921 21:58:25.010950    2676 cli_runner.go:164] Run: docker rm -f -v multinode-20220921215635-5916
	I0921 21:58:25.263599    2676 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220921215635-5916
	W0921 21:58:25.457209    2676 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:25.466790    2676 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:58:25.651493    2676 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:58:25.658789    2676 network_create.go:272] running [docker network inspect multinode-20220921215635-5916] to gather additional debugging logs...
	I0921 21:58:25.658789    2676 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916
	W0921 21:58:25.853654    2676 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:25.853654    2676 network_create.go:275] error running [docker network inspect multinode-20220921215635-5916]: docker network inspect multinode-20220921215635-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220921215635-5916
	I0921 21:58:25.853654    2676 network_create.go:277] output of [docker network inspect multinode-20220921215635-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220921215635-5916
	
	** /stderr **
	W0921 21:58:25.854872    2676 delete.go:139] delete failed (probably ok) <nil>
	I0921 21:58:25.854872    2676 fix.go:115] Sleeping 1 second for extra luck!
	I0921 21:58:26.863491    2676 start.go:125] createHost starting for "" (driver="docker")
	I0921 21:58:26.867073    2676 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 21:58:26.867698    2676 start.go:159] libmachine.API.Create for "multinode-20220921215635-5916" (driver="docker")
	I0921 21:58:26.867770    2676 client.go:168] LocalClient.Create starting
	I0921 21:58:26.868383    2676 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 21:58:26.868432    2676 main.go:134] libmachine: Decoding PEM data...
	I0921 21:58:26.868432    2676 main.go:134] libmachine: Parsing certificate...
	I0921 21:58:26.868432    2676 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 21:58:26.869140    2676 main.go:134] libmachine: Decoding PEM data...
	I0921 21:58:26.869227    2676 main.go:134] libmachine: Parsing certificate...
	I0921 21:58:26.878577    2676 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:58:27.065128    2676 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:58:27.074389    2676 network_create.go:272] running [docker network inspect multinode-20220921215635-5916] to gather additional debugging logs...
	I0921 21:58:27.074389    2676 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916
	W0921 21:58:27.267817    2676 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:27.267817    2676 network_create.go:275] error running [docker network inspect multinode-20220921215635-5916]: docker network inspect multinode-20220921215635-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220921215635-5916
	I0921 21:58:27.267817    2676 network_create.go:277] output of [docker network inspect multinode-20220921215635-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220921215635-5916
	
	** /stderr **
	I0921 21:58:27.276364    2676 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 21:58:27.492865    2676 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000afe2e0] misses:0}
	I0921 21:58:27.493502    2676 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:58:27.493502    2676 network_create.go:115] attempt to create docker network multinode-20220921215635-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 21:58:27.501366    2676 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916
	W0921 21:58:27.688534    2676 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916 returned with exit code 1
	E0921 21:58:27.688655    2676 network_create.go:104] error while trying to create docker network multinode-20220921215635-5916 192.168.49.0/24: create docker network multinode-20220921215635-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d2295f02ed6a5d30255d18e41f4a3e866528803aa2030a1b2830a1925b9f6bf9 (br-d2295f02ed6a): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 21:58:27.688986    2676 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d2295f02ed6a5d30255d18e41f4a3e866528803aa2030a1b2830a1925b9f6bf9 (br-d2295f02ed6a): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d2295f02ed6a5d30255d18e41f4a3e866528803aa2030a1b2830a1925b9f6bf9 (br-d2295f02ed6a): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 21:58:27.702972    2676 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 21:58:27.915664    2676 cli_runner.go:164] Run: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 21:58:28.113693    2676 cli_runner.go:211] docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 21:58:28.113784    2676 client.go:171] LocalClient.Create took 1.2460049s
	I0921 21:58:30.136402    2676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:58:30.143517    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:30.343800    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:30.343800    2676 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:30.511455    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:30.691991    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:30.691991    2676 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:31.002672    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:31.197571    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:31.197571    2676 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:31.776854    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:31.955558    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 21:58:31.955558    2676 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:58:31.955558    2676 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:31.967008    2676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:58:31.973679    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:32.173818    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:32.174117    2676 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:32.369617    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:32.562228    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:32.562228    2676 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:32.915939    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:33.114610    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:33.114610    2676 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:33.596760    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:33.775674    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 21:58:33.775674    2676 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:58:33.775674    2676 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:33.775674    2676 start.go:128] duration metric: createHost completed in 6.9121335s
	I0921 21:58:33.787249    2676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:58:33.792863    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:33.990838    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:33.990838    2676 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:34.202266    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:34.395386    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:34.395537    2676 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:34.704159    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:34.900441    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:34.900666    2676 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:35.571508    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:35.765937    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 21:58:35.766024    2676 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:58:35.766024    2676 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:35.779120    2676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:58:35.784834    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:35.982472    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:35.982472    2676 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:36.176455    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:36.354910    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:36.355009    2676 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:36.686048    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:36.887791    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:36.887791    2676 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:37.504884    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:58:37.715138    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 21:58:37.715138    2676 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:58:37.715138    2676 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:37.715138    2676 fix.go:57] fixHost completed within 31.2939135s
	I0921 21:58:37.715138    2676 start.go:83] releasing machines lock for "multinode-20220921215635-5916", held for 31.2945795s
	W0921 21:58:37.715138    2676 start.go:602] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	W0921 21:58:37.715713    2676 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	I0921 21:58:37.715713    2676 start.go:617] Will try again in 5 seconds ...
	I0921 21:58:42.719689    2676 start.go:364] acquiring machines lock for multinode-20220921215635-5916: {Name:mk1da0b6aaf7b0158fd60ed6f72b6dfa2716f3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:58:42.720116    2676 start.go:368] acquired machines lock for "multinode-20220921215635-5916" in 311.2µs
	I0921 21:58:42.720116    2676 start.go:96] Skipping create...Using existing machine configuration
	I0921 21:58:42.720116    2676 fix.go:55] fixHost starting: 
	I0921 21:58:42.734953    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:42.935740    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:42.935794    2676 fix.go:103] recreateIfNeeded on multinode-20220921215635-5916: state= err=unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:42.935794    2676 fix.go:108] machineExists: false. err=machine does not exist
	I0921 21:58:42.939793    2676 out.go:177] * docker "multinode-20220921215635-5916" container is missing, will recreate.
	I0921 21:58:42.942018    2676 delete.go:124] DEMOLISHING multinode-20220921215635-5916 ...
	I0921 21:58:42.959162    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:43.160372    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:58:43.160695    2676 stop.go:75] unable to get state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:43.160695    2676 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:43.173402    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:43.363430    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:43.363616    2676 delete.go:82] Unable to get host status for multinode-20220921215635-5916, assuming it has already been deleted: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:43.371467    2676 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220921215635-5916
	W0921 21:58:43.563749    2676 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220921215635-5916 returned with exit code 1
	I0921 21:58:43.563749    2676 kic.go:356] could not find the container multinode-20220921215635-5916 to remove it. will try anyways
	I0921 21:58:43.571508    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:43.750450    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:58:43.750450    2676 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:43.757427    2676 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0"
	W0921 21:58:43.954961    2676 cli_runner.go:211] docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 21:58:43.954961    2676 oci.go:646] error shutdown multinode-20220921215635-5916: docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:44.972440    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:45.169052    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:45.169109    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:45.169109    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:45.169109    2676 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:45.590683    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:45.783348    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:45.783528    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:45.783528    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:45.783558    2676 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:46.396488    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:46.591822    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:46.591955    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:46.592147    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:46.592147    2676 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:48.017723    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:48.196526    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:48.196526    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:48.196526    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:48.196526    2676 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:49.405334    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:49.599456    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:49.599850    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:49.599949    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:49.600009    2676 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:53.069690    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:53.251258    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:53.251604    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:53.251644    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:53.251644    2676 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:57.814483    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:58:57.993094    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:58:57.993094    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:58:57.993094    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:58:57.993094    2676 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:03.842914    2676 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:59:04.055943    2676 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:59:04.056067    2676 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:04.056107    2676 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:59:04.056136    2676 oci.go:88] couldn't shut down multinode-20220921215635-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	 
	I0921 21:59:04.066233    2676 cli_runner.go:164] Run: docker rm -f -v multinode-20220921215635-5916
	I0921 21:59:04.281257    2676 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220921215635-5916
	W0921 21:59:04.475754    2676 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:04.483579    2676 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:59:04.661051    2676 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:59:04.668707    2676 network_create.go:272] running [docker network inspect multinode-20220921215635-5916] to gather additional debugging logs...
	I0921 21:59:04.668707    2676 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916
	W0921 21:59:04.847922    2676 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:04.848070    2676 network_create.go:275] error running [docker network inspect multinode-20220921215635-5916]: docker network inspect multinode-20220921215635-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220921215635-5916
	I0921 21:59:04.848070    2676 network_create.go:277] output of [docker network inspect multinode-20220921215635-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220921215635-5916
	
	** /stderr **
	W0921 21:59:04.848925    2676 delete.go:139] delete failed (probably ok) <nil>
	I0921 21:59:04.848925    2676 fix.go:115] Sleeping 1 second for extra luck!
	I0921 21:59:05.849429    2676 start.go:125] createHost starting for "" (driver="docker")
	I0921 21:59:05.855838    2676 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 21:59:05.856330    2676 start.go:159] libmachine.API.Create for "multinode-20220921215635-5916" (driver="docker")
	I0921 21:59:05.856396    2676 client.go:168] LocalClient.Create starting
	I0921 21:59:05.856963    2676 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 21:59:05.857188    2676 main.go:134] libmachine: Decoding PEM data...
	I0921 21:59:05.857188    2676 main.go:134] libmachine: Parsing certificate...
	I0921 21:59:05.857188    2676 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 21:59:05.857188    2676 main.go:134] libmachine: Decoding PEM data...
	I0921 21:59:05.857765    2676 main.go:134] libmachine: Parsing certificate...
	I0921 21:59:05.867028    2676 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:59:06.051145    2676 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:59:06.060931    2676 network_create.go:272] running [docker network inspect multinode-20220921215635-5916] to gather additional debugging logs...
	I0921 21:59:06.060931    2676 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916
	W0921 21:59:06.252968    2676 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:06.252968    2676 network_create.go:275] error running [docker network inspect multinode-20220921215635-5916]: docker network inspect multinode-20220921215635-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220921215635-5916
	I0921 21:59:06.252968    2676 network_create.go:277] output of [docker network inspect multinode-20220921215635-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220921215635-5916
	
	** /stderr **
	I0921 21:59:06.261595    2676 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 21:59:06.473158    2676 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000afe2e0] amended:false}} dirty:map[] misses:0}
	I0921 21:59:06.473235    2676 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:59:06.487184    2676 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000afe2e0] amended:true}} dirty:map[192.168.49.0:0xc000afe2e0 192.168.58.0:0xc000afe818] misses:0}
	I0921 21:59:06.487184    2676 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:59:06.487184    2676 network_create.go:115] attempt to create docker network multinode-20220921215635-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 21:59:06.496053    2676 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916
	W0921 21:59:06.689977    2676 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916 returned with exit code 1
	E0921 21:59:06.689977    2676 network_create.go:104] error while trying to create docker network multinode-20220921215635-5916 192.168.58.0/24: create docker network multinode-20220921215635-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6d9125ce5e4669c0b9f2e5b93bcef4b83da592394c6052b8e465492b616a61d8 (br-6d9125ce5e46): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 21:59:06.689977    2676 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6d9125ce5e4669c0b9f2e5b93bcef4b83da592394c6052b8e465492b616a61d8 (br-6d9125ce5e46): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6d9125ce5e4669c0b9f2e5b93bcef4b83da592394c6052b8e465492b616a61d8 (br-6d9125ce5e46): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 21:59:06.706722    2676 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 21:59:06.933620    2676 cli_runner.go:164] Run: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 21:59:07.127640    2676 cli_runner.go:211] docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 21:59:07.127700    2676 client.go:171] LocalClient.Create took 1.2712945s
	I0921 21:59:09.148101    2676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:59:09.153788    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:09.335081    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:09.335081    2676 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:09.514256    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:09.693782    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:09.694110    2676 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:10.126897    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:10.362706    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 21:59:10.362885    2676 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:59:10.362885    2676 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:10.373351    2676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:59:10.380078    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:10.565385    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:10.565385    2676 retry.go:31] will retry after 144.863405ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:10.720736    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:10.918104    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:10.918221    2676 retry.go:31] will retry after 410.553224ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:11.341235    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:11.563453    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:11.563654    2676 retry.go:31] will retry after 314.505366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:11.903326    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:12.109059    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 21:59:12.109059    2676 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:59:12.109059    2676 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:12.109059    2676 start.go:128] duration metric: createHost completed in 6.2593915s
	I0921 21:59:12.120594    2676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:59:12.126199    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:12.311884    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:12.312286    2676 retry.go:31] will retry after 200.38067ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:12.527174    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:12.736167    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:12.736306    2676 retry.go:31] will retry after 252.474839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:13.001289    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:13.196067    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:13.196067    2676 retry.go:31] will retry after 585.618668ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:13.800922    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:14.019897    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 21:59:14.019897    2676 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:59:14.019897    2676 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:14.030397    2676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:59:14.036293    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:14.220913    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:14.220913    2676 retry.go:31] will retry after 194.626905ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:14.430451    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:14.622420    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:14.622635    2676 retry.go:31] will retry after 346.182076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:14.989444    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:15.199908    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:15.199908    2676 retry.go:31] will retry after 579.704465ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:15.797555    2676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 21:59:15.991040    2676 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 21:59:15.991299    2676 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 21:59:15.991299    2676 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:15.991299    2676 fix.go:57] fixHost completed within 33.2709471s
	I0921 21:59:15.991299    2676 start.go:83] releasing machines lock for "multinode-20220921215635-5916", held for 33.2709471s
	W0921 21:59:15.992212    2676 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-20220921215635-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220921215635-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	I0921 21:59:16.007635    2676 out.go:177] 
	W0921 21:59:16.010572    2676 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	W0921 21:59:16.010572    2676 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 21:59:16.012825    2676 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 21:59:16.015218    2676 out.go:177] 

                                                
                                                
** /stderr **
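The stderr above shows the dedicated-network creation failing because 192.168.58.0/24 overlaps an existing bridge network (br-8a3cd8d165a4). As a minimal sketch, assuming only the docker CLI and the Go standard library (this is not minikube code; the output format is illustrative), the following lists every network's IPv4 subnet with the same IPAM template style the log uses, which makes the conflicting range easy to spot:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Enumerate the networks known to the local daemon.
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		fmt.Println("docker network ls failed:", err)
		return
	}
	for _, name := range strings.Fields(string(out)) {
		// Same IPAM template style seen in the log above.
		subnet, err := exec.Command("docker", "network", "inspect", name,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue // the network may have been removed between ls and inspect
		}
		fmt.Printf("%-40s %s\n", name, strings.TrimSpace(string(subnet)))
	}
}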
multinode_test.go:295: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-20220921215635-5916" : exit status 60
multinode_test.go:298: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220921215635-5916
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220921215635-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220921215635-5916: exit status 1 (252.85ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916: exit status 7 (565.9587ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:59:17.406137    8884 status.go:247] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220921215635-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (96.38s)
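The terminal error for this test is PR_DOCKER_READONLY_VOL: the daemon could not mkdir under /var/lib/docker because its storage had gone read-only. A small Go sketch, assuming a reachable local daemon and using a placeholder volume name, that reproduces the same check by creating and removing a throwaway volume; while the condition holds, the create call returns the same "read-only file system" daemon error seen above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const probe = "readonly-probe" // hypothetical throwaway volume name
	if out, err := exec.Command("docker", "volume", "create", probe).CombinedOutput(); err != nil {
		fmt.Printf("volume create failed (daemon storage likely read-only): %s\n", out)
		return
	}
	// Clean up if the create succeeded; removal is best effort.
	_ = exec.Command("docker", "volume", "rm", probe).Run()
	fmt.Println("volume create/remove succeeded; daemon storage is writable")
}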

                                                
                                    
TestMultiNode/serial/DeleteNode (2.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 node delete m03
multinode_test.go:392: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 node delete m03: exit status 80 (1.0399744s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_207105384607abbf0a822abec5db82084f27bc08_7.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:394: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 node delete m03": exit status 80
multinode_test.go:398: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --alsologtostderr
multinode_test.go:398: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --alsologtostderr: exit status 7 (533.0863ms)

                                                
                                                
-- stdout --
	multinode-20220921215635-5916
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:59:18.708875    2964 out.go:296] Setting OutFile to fd 944 ...
	I0921 21:59:18.762882    2964 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:59:18.762882    2964 out.go:309] Setting ErrFile to fd 788...
	I0921 21:59:18.762882    2964 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:59:18.774045    2964 out.go:303] Setting JSON to false
	I0921 21:59:18.774045    2964 mustload.go:65] Loading cluster: multinode-20220921215635-5916
	I0921 21:59:18.774785    2964 config.go:180] Loaded profile config "multinode-20220921215635-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:59:18.774785    2964 status.go:253] checking status of multinode-20220921215635-5916 ...
	I0921 21:59:18.790564    2964 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:59:18.979782    2964 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:59:18.979782    2964 status.go:328] multinode-20220921215635-5916 host status = "" (err=state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	)
	I0921 21:59:18.979782    2964 status.go:255] multinode-20220921215635-5916 status: &{Name:multinode-20220921215635-5916 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0921 21:59:18.979782    2964 status.go:258] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	E0921 21:59:18.979782    2964 status.go:261] The "multinode-20220921215635-5916" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:400: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220921215635-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220921215635-5916: exit status 1 (239.4988ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916: exit status 7 (564.9158ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:59:19.793003    8320 status.go:247] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220921215635-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (2.39s)
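Both the node delete and the status calls above reduce to the same probe: docker container inspect --format={{.State.Status}}, where a "No such container" failure is reported as Nonexistent rather than a hard error. An illustrative Go sketch of that probe, assuming only the docker CLI and the standard library (not minikube's actual status.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerState(name string) string {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		// A missing container surfaces as "No such container" with exit status 1.
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent"
		}
		return "Unknown"
	}
	return strings.TrimSpace(string(out))
}

func main() {
	fmt.Println(containerState("multinode-20220921215635-5916"))
}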

                                                
                                    
TestMultiNode/serial/StopMultiNode (20.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 stop
multinode_test.go:312: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 stop: exit status 82 (19.111733s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-20220921215635-5916"  ...
	* Stopping node "multinode-20220921215635-5916"  ...
	* Stopping node "multinode-20220921215635-5916"  ...
	* Stopping node "multinode-20220921215635-5916"  ...
	* Stopping node "multinode-20220921215635-5916"  ...
	* Stopping node "multinode-20220921215635-5916"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:59:23.674339    7956 daemonize_windows.go:38] error terminating scheduled stop for profile multinode-20220921215635-5916: stopping schedule-stop service for profile multinode-20220921215635-5916: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect multinode-20220921215635-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:314: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 stop": exit status 82
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status: exit status 7 (550.8334ms)

                                                
                                                
-- stdout --
	multinode-20220921215635-5916
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:59:39.456128    7336 status.go:258] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	E0921 21:59:39.456128    7336 status.go:261] The "multinode-20220921215635-5916" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --alsologtostderr: exit status 7 (525.1313ms)

                                                
                                                
-- stdout --
	multinode-20220921215635-5916
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:59:39.716075     740 out.go:296] Setting OutFile to fd 976 ...
	I0921 21:59:39.770078     740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:59:39.770078     740 out.go:309] Setting ErrFile to fd 744...
	I0921 21:59:39.770078     740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:59:39.780070     740 out.go:303] Setting JSON to false
	I0921 21:59:39.780070     740 mustload.go:65] Loading cluster: multinode-20220921215635-5916
	I0921 21:59:39.781078     740 config.go:180] Loaded profile config "multinode-20220921215635-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:59:39.781078     740 status.go:253] checking status of multinode-20220921215635-5916 ...
	I0921 21:59:39.794072     740 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:59:39.982252     740 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:59:39.982510     740 status.go:328] multinode-20220921215635-5916 host status = "" (err=state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	)
	I0921 21:59:39.982510     740 status.go:255] multinode-20220921215635-5916 status: &{Name:multinode-20220921215635-5916 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0921 21:59:39.982510     740 status.go:258] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	E0921 21:59:39.982510     740 status.go:261] The "multinode-20220921215635-5916" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:331: incorrect number of stopped hosts: args "out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --alsologtostderr": multinode-20220921215635-5916
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:335: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-20220921215635-5916 status --alsologtostderr": multinode-20220921215635-5916
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220921215635-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220921215635-5916: exit status 1 (252.7337ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916: exit status 7 (520.9515ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:59:40.766205    5008 status.go:247] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220921215635-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (20.97s)
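The stop path above fails one step earlier: before it can SSH into the node it must resolve the published host port for 22/tcp, using the inspect template shown in the stderr, and that lookup keeps returning "No such container". A short Go sketch of that lookup, with the container name as a placeholder; the Go template string is the one from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("multinode-20220921215635-5916")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", port)
}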

                                                
                                    
TestMultiNode/serial/RestartMultiNode (76.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220921215635-5916 --wait=true -v=8 --alsologtostderr --driver=docker
multinode_test.go:352: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220921215635-5916 --wait=true -v=8 --alsologtostderr --driver=docker: exit status 60 (1m15.5191962s)

                                                
                                                
-- stdout --
	* [multinode-20220921215635-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20220921215635-5916 in cluster multinode-20220921215635-5916
	* Pulling base image ...
	* docker "multinode-20220921215635-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20220921215635-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:59:41.278504    6032 out.go:296] Setting OutFile to fd 604 ...
	I0921 21:59:41.332524    6032 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:59:41.332524    6032 out.go:309] Setting ErrFile to fd 784...
	I0921 21:59:41.332524    6032 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:59:41.349493    6032 out.go:303] Setting JSON to false
	I0921 21:59:41.351492    6032 start.go:115] hostinfo: {"hostname":"minikube2","uptime":3649,"bootTime":1663793932,"procs":146,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 21:59:41.352489    6032 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 21:59:41.357481    6032 out.go:177] * [multinode-20220921215635-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 21:59:41.360493    6032 notify.go:214] Checking for updates...
	I0921 21:59:41.362495    6032 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 21:59:41.365483    6032 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 21:59:41.368496    6032 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 21:59:41.370483    6032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:59:41.373493    6032 config.go:180] Loaded profile config "multinode-20220921215635-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:59:41.374494    6032 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:59:41.682879    6032 docker.go:137] docker version: linux-20.10.17
	I0921 21:59:41.691044    6032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:59:42.198801    6032 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:60 SystemTime:2022-09-21 21:59:41.8349012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:59:42.209583    6032 out.go:177] * Using the docker driver based on existing profile
	I0921 21:59:42.212628    6032 start.go:284] selected driver: docker
	I0921 21:59:42.212628    6032 start.go:808] validating driver "docker" against &{Name:multinode-20220921215635-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:multinode-20220921215635-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:59:42.213076    6032 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 21:59:42.227691    6032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:59:42.760708    6032 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:60 SystemTime:2022-09-21 21:59:42.3844259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:59:42.868761    6032 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 21:59:42.868761    6032 cni.go:95] Creating CNI manager for ""
	I0921 21:59:42.868761    6032 cni.go:156] 1 nodes found, recommending kindnet
	I0921 21:59:42.868761    6032 start_flags.go:316] config:
	{Name:multinode-20220921215635-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:multinode-20220921215635-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPa
th:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:59:42.873744    6032 out.go:177] * Starting control plane node multinode-20220921215635-5916 in cluster multinode-20220921215635-5916
	I0921 21:59:42.876558    6032 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 21:59:42.879570    6032 out.go:177] * Pulling base image ...
	I0921 21:59:42.881863    6032 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 21:59:42.881863    6032 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:59:42.882049    6032 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 21:59:42.882097    6032 cache.go:57] Caching tarball of preloaded images
	I0921 21:59:42.882644    6032 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 21:59:42.882842    6032 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 21:59:42.883052    6032 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20220921215635-5916\config.json ...
	I0921 21:59:43.102227    6032 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 21:59:43.102294    6032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:59:43.102420    6032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:59:43.102420    6032 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 21:59:43.102420    6032 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 21:59:43.102420    6032 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 21:59:43.102960    6032 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 21:59:43.102960    6032 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 21:59:43.102960    6032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
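	The two "windows sanitize" lines above show how the on-disk cache name is derived from the image reference: each ':' in the tag/digest is replaced with '_' so the result is a legal Windows file name (the '@' is kept). A minimal illustration of that mapping, assuming a POSIX shell (the sed call is only illustrative, not minikube's actual implementation):

	    echo 'kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar' | sed 's/:/_/g'
	    # kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar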
	I0921 21:59:45.283303    6032 image.go:219] response: {"errorDetail":{"message":"mkdir /var/lib/docker/tmp/docker-import-3817641379: read-only file system"},"error":"mkdir /var/lib/docker/tmp/docker-import-3817641379: read-only file system"}
	I0921 21:59:45.283303    6032 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 21:59:45.283826    6032 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 21:59:45.283826    6032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 21:59:45.284143    6032 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:59:45.486244    6032 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 21:59:45.486244    6032 image.go:258] Getting image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 21:59:45.776864    6032 image.go:272] Writing image gcr.io/k8s-minikube/kicbase:v0.0.34
	    > gcr.io/k8s-minikube/kicbase:  0 B [_______________________] ?% ? p/s 1.0s
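	I0921 21:59:46.728645    6032 image.go:306] Pulling image gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c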
	I0921 21:59:47.081169    6032 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 21:59:47.081319    6032 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
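	The image.go:219 response a few lines up is the root cause of this warning: the daemon rejects the tarball import because its storage root (/var/lib/docker) is mounted read-only, so minikube falls back to the copy of kicbase already present in the daemon. A quick way to confirm the daemon's storage root and whether it accepts writes at all (a sketch using the standard Docker CLI; "ro-probe" is just a throwaway volume name):

	    docker info --format '{{.DockerRootDir}}'                    # where the daemon keeps images and volumes
	    docker volume create ro-probe && docker volume rm ro-probe   # fails with "read-only file system" when that root is not writable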
	I0921 21:59:47.081319    6032 cache.go:208] Successfully downloaded all kic artifacts
	I0921 21:59:47.081487    6032 start.go:364] acquiring machines lock for multinode-20220921215635-5916: {Name:mk1da0b6aaf7b0158fd60ed6f72b6dfa2716f3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:59:47.081707    6032 start.go:368] acquired machines lock for "multinode-20220921215635-5916" in 166.3µs
	I0921 21:59:47.081767    6032 start.go:96] Skipping create...Using existing machine configuration
	I0921 21:59:47.081767    6032 fix.go:55] fixHost starting: 
	I0921 21:59:47.095666    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:59:47.284594    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:59:47.284720    6032 fix.go:103] recreateIfNeeded on multinode-20220921215635-5916: state= err=unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:47.284748    6032 fix.go:108] machineExists: false. err=machine does not exist
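	Every state probe in this log fails the same way: docker container inspect exits non-zero when the named container does not exist, and minikube maps that to state= / "machine does not exist" and decides to recreate. The probe is the inspect call shown above; a sketch of an equivalent existence check that does not rely on a non-zero exit code:

	    docker container inspect multinode-20220921215635-5916 --format '{{.State.Status}}'   # "No such container" and exit status 1 when absent
	    docker ps -a --filter name=multinode-20220921215635-5916 --format '{{.Names}}'        # empty output, exit status 0, when absent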
	I0921 21:59:47.288029    6032 out.go:177] * docker "multinode-20220921215635-5916" container is missing, will recreate.
	I0921 21:59:47.291599    6032 delete.go:124] DEMOLISHING multinode-20220921215635-5916 ...
	I0921 21:59:47.304606    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:59:47.485845    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:59:47.486020    6032 stop.go:75] unable to get state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:47.486020    6032 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:47.499389    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:59:47.706539    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:59:47.706799    6032 delete.go:82] Unable to get host status for multinode-20220921215635-5916, assuming it has already been deleted: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:47.714402    6032 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220921215635-5916
	W0921 21:59:47.907468    6032 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220921215635-5916 returned with exit code 1
	I0921 21:59:47.907682    6032 kic.go:356] could not find the container multinode-20220921215635-5916 to remove it. will try anyways
	I0921 21:59:47.914847    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:59:48.109799    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	W0921 21:59:48.109799    6032 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:48.115817    6032 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0"
	W0921 21:59:48.313108    6032 cli_runner.go:211] docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 21:59:48.313108    6032 oci.go:646] error shutdown multinode-20220921215635-5916: docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:49.320944    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:59:49.547588    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:59:49.547761    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:49.547822    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:59:49.547900    6032 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:50.111651    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:59:50.289576    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:59:50.289576    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:50.289576    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:59:50.289576    6032 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:51.383785    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:59:51.596434    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:59:51.596533    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:51.596691    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:59:51.596691    6032 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:52.917722    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:59:53.126367    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:59:53.126471    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:53.126471    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:59:53.126641    6032 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:54.724665    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:59:54.906671    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:59:54.906671    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:54.906671    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:59:54.906671    6032 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:57.263541    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 21:59:57.455505    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 21:59:57.455868    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 21:59:57.455900    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 21:59:57.455928    6032 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:01.972201    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 22:00:02.168711    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:00:02.168711    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:02.168711    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 22:00:02.168711    6032 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:05.399755    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 22:00:05.592828    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:00:05.592905    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:05.592905    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 22:00:05.592905    6032 oci.go:88] couldn't shut down multinode-20220921215635-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	 
	I0921 22:00:05.600905    6032 cli_runner.go:164] Run: docker rm -f -v multinode-20220921215635-5916
	I0921 22:00:05.816793    6032 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220921215635-5916
	W0921 22:00:05.996110    6032 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:06.004875    6032 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:00:06.199255    6032 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:00:06.206315    6032 network_create.go:272] running [docker network inspect multinode-20220921215635-5916] to gather additional debugging logs...
	I0921 22:00:06.206315    6032 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916
	W0921 22:00:06.402135    6032 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:06.402331    6032 network_create.go:275] error running [docker network inspect multinode-20220921215635-5916]: docker network inspect multinode-20220921215635-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220921215635-5916
	I0921 22:00:06.402331    6032 network_create.go:277] output of [docker network inspect multinode-20220921215635-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220921215635-5916
	
	** /stderr **
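	The long --format template passed to docker network inspect above collects the network's name, driver, subnet, gateway, MTU and attached container IPs in one call; it fails here only because the network itself no longer exists. Against a network that does exist, the same idea reduces to a sketch like this (using the built-in "bridge" network as a stand-in):

	    docker network inspect bridge --format '{{.Name}} {{.Driver}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'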
	W0921 22:00:06.403002    6032 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:00:06.403002    6032 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:00:07.415982    6032 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:00:07.421397    6032 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:00:07.421660    6032 start.go:159] libmachine.API.Create for "multinode-20220921215635-5916" (driver="docker")
	I0921 22:00:07.421660    6032 client.go:168] LocalClient.Create starting
	I0921 22:00:07.422243    6032 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:00:07.422516    6032 main.go:134] libmachine: Decoding PEM data...
	I0921 22:00:07.422516    6032 main.go:134] libmachine: Parsing certificate...
	I0921 22:00:07.422516    6032 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:00:07.422516    6032 main.go:134] libmachine: Decoding PEM data...
	I0921 22:00:07.422516    6032 main.go:134] libmachine: Parsing certificate...
	I0921 22:00:07.432198    6032 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:00:07.632995    6032 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:00:07.639799    6032 network_create.go:272] running [docker network inspect multinode-20220921215635-5916] to gather additional debugging logs...
	I0921 22:00:07.639799    6032 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916
	W0921 22:00:07.835081    6032 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:07.835131    6032 network_create.go:275] error running [docker network inspect multinode-20220921215635-5916]: docker network inspect multinode-20220921215635-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220921215635-5916
	I0921 22:00:07.835188    6032 network_create.go:277] output of [docker network inspect multinode-20220921215635-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220921215635-5916
	
	** /stderr **
	I0921 22:00:07.843444    6032 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:00:08.056947    6032 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000a5b0] misses:0}
	I0921 22:00:08.057797    6032 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:00:08.057797    6032 network_create.go:115] attempt to create docker network multinode-20220921215635-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:00:08.065881    6032 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916
	W0921 22:00:08.257355    6032 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916 returned with exit code 1
	E0921 22:00:08.257355    6032 network_create.go:104] error while trying to create docker network multinode-20220921215635-5916 192.168.49.0/24: create docker network multinode-20220921215635-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9e153cebc8b7e2536383917fa65fb679b09bd5b629cc99af67acd7234d67dcbc (br-9e153cebc8b7): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:00:08.257355    6032 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9e153cebc8b7e2536383917fa65fb679b09bd5b629cc99af67acd7234d67dcbc (br-9e153cebc8b7): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9e153cebc8b7e2536383917fa65fb679b09bd5b629cc99af67acd7234d67dcbc (br-9e153cebc8b7): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
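	The create fails because another bridge network (br-a04d36bfb3cf) already claims an address range that overlaps 192.168.49.0/24, so the dedicated minikube network cannot be recreated on its usual subnet. One way to see which existing networks hold which subnets, assuming a POSIX shell over the standard Docker CLI:

	    for n in $(docker network ls -q); do
	      docker network inspect "$n" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
	    done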
	
	I0921 22:00:08.272537    6032 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:00:08.468878    6032 cli_runner.go:164] Run: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:00:08.659386    6032 cli_runner.go:211] docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:00:08.659548    6032 client.go:171] LocalClient.Create took 1.2377994s
	I0921 22:00:10.682047    6032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:00:10.688161    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:10.888966    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:10.889220    6032 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:11.060526    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:11.238376    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:11.238376    6032 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:11.550009    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:11.744737    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:11.744737    6032 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:12.330525    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:12.535311    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 22:00:12.535732    6032 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 22:00:12.535832    6032 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:12.548952    6032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:00:12.555411    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:12.769326    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:12.769470    6032 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:12.959503    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:13.155664    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:13.155888    6032 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:13.494914    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:13.672872    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:13.672872    6032 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:14.155427    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:14.340831    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 22:00:14.340831    6032 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 22:00:14.340831    6032 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
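	Before provisioning, minikube checks disk space by running df inside the node over SSH; with no container to SSH into, both the Use% probe (df -h /var, 5th field of line 2) and the free-space probe (df -BG /var, 4th field) fail with the same "no such container" error. Against a healthy node the same checks can be run by hand, e.g. (assuming a reachable minikube node):

	    minikube ssh -- "df -h /var"    # minikube reads the Use% column of line 2
	    minikube ssh -- "df -BG /var"   # and the available-space column, in GiB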
	I0921 22:00:14.340831    6032 start.go:128] duration metric: createHost completed in 6.9247993s
	I0921 22:00:14.353646    6032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:00:14.362743    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:14.543729    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:14.543729    6032 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:14.754456    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:14.948281    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:14.948607    6032 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:15.267116    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:15.461761    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:15.461761    6032 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:16.134090    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:16.328326    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 22:00:16.328326    6032 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 22:00:16.328326    6032 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:16.339666    6032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:00:16.345660    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:16.530348    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:16.530408    6032 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:16.723559    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:16.931669    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:16.931855    6032 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:17.268405    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:17.461999    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:17.462360    6032 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:18.084959    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:18.278644    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 22:00:18.278644    6032 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 22:00:18.278644    6032 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:18.278644    6032 fix.go:57] fixHost completed within 31.1966519s
	I0921 22:00:18.278644    6032 start.go:83] releasing machines lock for "multinode-20220921215635-5916", held for 31.1967118s
	W0921 22:00:18.278644    6032 start.go:602] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	W0921 22:00:18.278644    6032 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
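	Both start attempts die at the same point: the daemon cannot create the node's volume because /var/lib/docker is read-only, matching the earlier failed tarball import. The failing call can be reproduced directly with the command from the log (the labels are not required to reproduce the error):

	    docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true
	    # Error response from daemon: ... read-only file system

	A daemon in this state on Docker Desktop with the WSL2 backend typically needs a Docker Desktop restart before any image or volume writes succeed.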
	
	I0921 22:00:18.278644    6032 start.go:617] Will try again in 5 seconds ...
	I0921 22:00:23.293021    6032 start.go:364] acquiring machines lock for multinode-20220921215635-5916: {Name:mk1da0b6aaf7b0158fd60ed6f72b6dfa2716f3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:00:23.293542    6032 start.go:368] acquired machines lock for "multinode-20220921215635-5916" in 332.1µs
	I0921 22:00:23.293773    6032 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:00:23.293773    6032 fix.go:55] fixHost starting: 
	I0921 22:00:23.307570    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 22:00:23.519108    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:00:23.519108    6032 fix.go:103] recreateIfNeeded on multinode-20220921215635-5916: state= err=unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:23.519108    6032 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:00:23.523005    6032 out.go:177] * docker "multinode-20220921215635-5916" container is missing, will recreate.
	I0921 22:00:23.526400    6032 delete.go:124] DEMOLISHING multinode-20220921215635-5916 ...
	I0921 22:00:23.544017    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 22:00:23.720694    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:00:23.720945    6032 stop.go:75] unable to get state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:23.720988    6032 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:23.730904    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 22:00:23.925694    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:00:23.925694    6032 delete.go:82] Unable to get host status for multinode-20220921215635-5916, assuming it has already been deleted: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:23.935421    6032 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220921215635-5916
	W0921 22:00:24.160714    6032 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:24.161070    6032 kic.go:356] could not find the container multinode-20220921215635-5916 to remove it. will try anyways
	I0921 22:00:24.168790    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 22:00:24.371394    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:00:24.371550    6032 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:24.381906    6032 cli_runner.go:164] Run: docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0"
	W0921 22:00:24.558897    6032 cli_runner.go:211] docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:00:24.558897    6032 oci.go:646] error shutdown multinode-20220921215635-5916: docker exec --privileged -t multinode-20220921215635-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:25.573466    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 22:00:25.766418    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:00:25.766418    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:25.766418    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 22:00:25.766418    6032 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:26.179459    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 22:00:26.388612    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:00:26.388612    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:26.388612    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 22:00:26.388612    6032 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:27.003199    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 22:00:27.198947    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:00:27.199322    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:27.199322    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 22:00:27.199418    6032 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:28.627145    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 22:00:28.818320    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:00:28.818469    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:28.818496    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 22:00:28.818496    6032 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:30.024906    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 22:00:30.242258    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:00:30.242258    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:30.242258    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 22:00:30.242258    6032 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:33.713242    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 22:00:33.905926    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:00:33.905926    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:33.905926    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 22:00:33.905926    6032 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:38.460144    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 22:00:38.654983    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:00:38.655097    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:38.655196    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 22:00:38.655196    6032 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:44.507193    6032 cli_runner.go:164] Run: docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}
	W0921 22:00:44.687669    6032 cli_runner.go:211] docker container inspect multinode-20220921215635-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:00:44.687798    6032 oci.go:658] temporary error verifying shutdown: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:44.687798    6032 oci.go:660] temporary error: container multinode-20220921215635-5916 status is  but expect it to be exited
	I0921 22:00:44.687798    6032 oci.go:88] couldn't shut down multinode-20220921215635-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	 
	I0921 22:00:44.695398    6032 cli_runner.go:164] Run: docker rm -f -v multinode-20220921215635-5916
	I0921 22:00:44.898540    6032 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-20220921215635-5916
	W0921 22:00:45.076400    6032 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:45.083576    6032 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:00:45.278628    6032 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:00:45.287210    6032 network_create.go:272] running [docker network inspect multinode-20220921215635-5916] to gather additional debugging logs...
	I0921 22:00:45.287210    6032 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916
	W0921 22:00:45.481855    6032 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:45.481855    6032 network_create.go:275] error running [docker network inspect multinode-20220921215635-5916]: docker network inspect multinode-20220921215635-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220921215635-5916
	I0921 22:00:45.481855    6032 network_create.go:277] output of [docker network inspect multinode-20220921215635-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220921215635-5916
	
	** /stderr **
	W0921 22:00:45.483279    6032 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:00:45.483462    6032 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:00:46.484922    6032 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:00:46.490372    6032 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:00:46.490672    6032 start.go:159] libmachine.API.Create for "multinode-20220921215635-5916" (driver="docker")
	I0921 22:00:46.490764    6032 client.go:168] LocalClient.Create starting
	I0921 22:00:46.491672    6032 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:00:46.492077    6032 main.go:134] libmachine: Decoding PEM data...
	I0921 22:00:46.492192    6032 main.go:134] libmachine: Parsing certificate...
	I0921 22:00:46.492417    6032 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:00:46.492729    6032 main.go:134] libmachine: Decoding PEM data...
	I0921 22:00:46.492729    6032 main.go:134] libmachine: Parsing certificate...
	I0921 22:00:46.506384    6032 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:00:46.703521    6032 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:00:46.713336    6032 network_create.go:272] running [docker network inspect multinode-20220921215635-5916] to gather additional debugging logs...
	I0921 22:00:46.713336    6032 cli_runner.go:164] Run: docker network inspect multinode-20220921215635-5916
	W0921 22:00:46.919985    6032 cli_runner.go:211] docker network inspect multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:46.920208    6032 network_create.go:275] error running [docker network inspect multinode-20220921215635-5916]: docker network inspect multinode-20220921215635-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220921215635-5916
	I0921 22:00:46.920301    6032 network_create.go:277] output of [docker network inspect multinode-20220921215635-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220921215635-5916
	
	** /stderr **
	I0921 22:00:46.927489    6032 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:00:47.154184    6032 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000a5b0] amended:false}} dirty:map[] misses:0}
	I0921 22:00:47.154184    6032 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:00:47.170632    6032 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000a5b0] amended:true}} dirty:map[192.168.49.0:0xc00000a5b0 192.168.58.0:0xc0004ca8a0] misses:0}
	I0921 22:00:47.170632    6032 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:00:47.170632    6032 network_create.go:115] attempt to create docker network multinode-20220921215635-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:00:47.180068    6032 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916
	W0921 22:00:47.383763    6032 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916 returned with exit code 1
	E0921 22:00:47.384018    6032 network_create.go:104] error while trying to create docker network multinode-20220921215635-5916 192.168.58.0/24: create docker network multinode-20220921215635-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 76f46da60127c0a13a1e9c2f2da670b04b81ac62e30e106037621348b19e498c (br-76f46da60127): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:00:47.384302    6032 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 76f46da60127c0a13a1e9c2f2da670b04b81ac62e30e106037621348b19e498c (br-76f46da60127): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916 multinode-20220921215635-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 76f46da60127c0a13a1e9c2f2da670b04b81ac62e30e106037621348b19e498c (br-76f46da60127): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:00:47.399080    6032 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:00:47.593786    6032 cli_runner.go:164] Run: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:00:47.777912    6032 cli_runner.go:211] docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:00:47.778151    6032 client.go:171] LocalClient.Create took 1.2873606s
	I0921 22:00:49.802344    6032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:00:49.809715    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:49.990207    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:49.990207    6032 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:50.180094    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:50.357645    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:50.357796    6032 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:50.796711    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:50.986721    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 22:00:50.986979    6032 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 22:00:50.986979    6032 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:50.996889    6032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:00:51.003638    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:51.205301    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:51.205301    6032 retry.go:31] will retry after 144.863405ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:51.359177    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:51.556979    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:51.556979    6032 retry.go:31] will retry after 410.553224ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:51.979849    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:52.173316    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:52.173518    6032 retry.go:31] will retry after 314.505366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:52.508291    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:52.703072    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 22:00:52.703188    6032 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 22:00:52.703414    6032 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:52.703414    6032 start.go:128] duration metric: createHost completed in 6.2184473s
	I0921 22:00:52.713314    6032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:00:52.719613    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:52.905108    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:52.905248    6032 retry.go:31] will retry after 200.38067ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:53.125765    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:53.316203    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:53.316203    6032 retry.go:31] will retry after 252.474839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:53.588850    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:53.779807    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:53.780076    6032 retry.go:31] will retry after 585.618668ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:54.380645    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:54.571549    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 22:00:54.571549    6032 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 22:00:54.571549    6032 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:54.584626    6032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:00:54.590262    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:54.774222    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:54.774504    6032 retry.go:31] will retry after 194.626905ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:54.986653    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:55.195699    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:55.195699    6032 retry.go:31] will retry after 346.182076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:55.555395    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:55.734172    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	I0921 22:00:55.734172    6032 retry.go:31] will retry after 579.704465ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:56.330056    6032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916
	W0921 22:00:56.524340    6032 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916 returned with exit code 1
	W0921 22:00:56.524340    6032 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	W0921 22:00:56.524340    6032 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20220921215635-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921215635-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	I0921 22:00:56.524340    6032 fix.go:57] fixHost completed within 33.2303272s
	I0921 22:00:56.524340    6032 start.go:83] releasing machines lock for "multinode-20220921215635-5916", held for 33.2305582s
	W0921 22:00:56.525295    6032 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-20220921215635-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220921215635-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	I0921 22:00:56.530621    6032 out.go:177] 
	W0921 22:00:56.533104    6032 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916 container: docker volume create multinode-20220921215635-5916 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916: read-only file system
	
	W0921 22:00:56.533339    6032 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:00:56.533339    6032 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:00:56.538959    6032 out.go:177] 

** /stderr **
multinode_test.go:354: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-20220921215635-5916 --wait=true -v=8 --alsologtostderr --driver=docker" : exit status 60
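
Note on the failure above: every attempt bottoms out in "docker volume create" failing with "read-only file system" under /var/lib/docker/volumes, i.e. a daemon-side condition rather than anything specific to this test. As a minimal sketch (not part of the test suite; the probe volume name is made up), the same condition can be checked from Go by shelling out to the Docker CLI:

	// probe_volume.go - illustrative only; the volume name "readonly-probe" is made up.
	// It tries to create (and then remove) a throwaway Docker volume and reports
	// whether the daemon's volume root is writable.
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const name = "readonly-probe"

		var stderr bytes.Buffer
		create := exec.Command("docker", "volume", "create", name)
		create.Stderr = &stderr

		if err := create.Run(); err != nil {
			if strings.Contains(stderr.String(), "read-only file system") {
				fmt.Println("daemon volume root is read-only; restarting Docker usually clears it")
				return
			}
			fmt.Printf("volume create failed: %v\n%s", err, stderr.String())
			return
		}

		// Creation worked, so the volume root is writable; remove the probe volume again.
		_ = exec.Command("docker", "volume", "rm", name).Run()
		fmt.Println("daemon volume root is writable")
	}

If the probe reports a read-only volume root, the "Restart Docker" suggestion minikube prints is the usual remedy.
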
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220921215635-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220921215635-5916: exit status 1 (239.1095ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220921215635-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916: exit status 7 (527.262ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:00:57.526479    6736 status.go:247] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220921215635-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (76.75s)
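
For context on the 76s duration above: the retry.go lines in the log show minikube re-checking the container state with a growing delay (396ms, 597ms, 1.4s, and so on up to about 5.8s) before giving up and falling back to "docker rm -f". The sketch below only illustrates that general retry-with-increasing-delay shape under assumed parameters; it is not minikube's actual retry.go:

	// retry_sketch.go - an illustration of the retry-with-growing-delay pattern seen
	// in the retry.go log lines; parameters and structure are assumptions, not
	// minikube's actual code.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryUntil calls verify until it succeeds or the overall deadline would be
	// exceeded, roughly doubling the wait between attempts.
	func retryUntil(verify func() error, initial, deadline time.Duration) error {
		start := time.Now()
		wait := initial
		var err error
		for {
			if err = verify(); err == nil {
				return nil
			}
			if time.Since(start)+wait > deadline {
				return fmt.Errorf("giving up after %v: %w", time.Since(start), err)
			}
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			wait *= 2
		}
	}

	func main() {
		// A verify step that never succeeds, like the "No such container" checks above.
		err := retryUntil(func() error {
			return errors.New(`unknown state "multinode-20220921215635-5916"`)
		}, 400*time.Millisecond, 20*time.Second)
		fmt.Println(err)
	}

Because every check here fails with the same "No such container" error, the loop simply exhausts its delay budget, which is why the demolish step in the log spends roughly 20 seconds (22:00:24 to 22:00:44) before the forced removal.
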

TestMultiNode/serial/ValidateNameConflict (100.53s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220921215635-5916
multinode_test.go:450: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220921215635-5916-m01 --driver=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220921215635-5916-m01 --driver=docker: exit status 60 (48.3508107s)

-- stdout --
	* [multinode-20220921215635-5916-m01] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node multinode-20220921215635-5916-m01 in cluster multinode-20220921215635-5916-m01
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "multinode-20220921215635-5916-m01" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 22:01:04.523296    5260 network_create.go:104] error while trying to create docker network multinode-20220921215635-5916-m01 192.168.49.0/24: create docker network multinode-20220921215635-5916-m01 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916-m01 multinode-20220921215635-5916-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e806d9fc87048d6572cffef11e4e8b232c2215df0776b574588a386d44351798 (br-e806d9fc8704): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916-m01 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916-m01 multinode-20220921215635-5916-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e806d9fc87048d6572cffef11e4e8b232c2215df0776b574588a386d44351798 (br-e806d9fc8704): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916-m01 container: docker volume create multinode-20220921215635-5916-m01 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916-m01 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916-m01: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916-m01': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916-m01: read-only file system
	
	E0921 22:01:36.777304    5260 network_create.go:104] error while trying to create docker network multinode-20220921215635-5916-m01 192.168.58.0/24: create docker network multinode-20220921215635-5916-m01 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916-m01 multinode-20220921215635-5916-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bfdb8d1d737342f58b55bd3bf116683314a20668a2dd869fc675321574b18b23 (br-bfdb8d1d7373): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916-m01 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916-m01 multinode-20220921215635-5916-m01: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bfdb8d1d737342f58b55bd3bf116683314a20668a2dd869fc675321574b18b23 (br-bfdb8d1d7373): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220921215635-5916-m01" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916-m01 container: docker volume create multinode-20220921215635-5916-m01 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916-m01 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916-m01: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916-m01': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916-m01: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916-m01 container: docker volume create multinode-20220921215635-5916-m01 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916-m01 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916-m01: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916-m01': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916-m01: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220921215635-5916-m02 --driver=docker
multinode_test.go:458: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220921215635-5916-m02 --driver=docker: exit status 60 (48.3651448s)

-- stdout --
	* [multinode-20220921215635-5916-m02] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node multinode-20220921215635-5916-m02 in cluster multinode-20220921215635-5916-m02
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "multinode-20220921215635-5916-m02" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 22:01:52.985280    9136 network_create.go:104] error while trying to create docker network multinode-20220921215635-5916-m02 192.168.49.0/24: create docker network multinode-20220921215635-5916-m02 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916-m02 multinode-20220921215635-5916-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5e2be77b064e1a8307a357ba93953b39315b0575b2ec5229eeb6c263f6d51816 (br-5e2be77b064e): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916-m02 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916-m02 multinode-20220921215635-5916-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 5e2be77b064e1a8307a357ba93953b39315b0575b2ec5229eeb6c263f6d51816 (br-5e2be77b064e): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916-m02 container: docker volume create multinode-20220921215635-5916-m02 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916-m02 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916-m02: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916-m02': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916-m02: read-only file system
	
	E0921 22:02:25.269908    9136 network_create.go:104] error while trying to create docker network multinode-20220921215635-5916-m02 192.168.58.0/24: create docker network multinode-20220921215635-5916-m02 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916-m02 multinode-20220921215635-5916-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4afd4106650d38c7fada811af4ec20272b95a731500d634c6c2a72f320856911 (br-4afd4106650d): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network multinode-20220921215635-5916-m02 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-20220921215635-5916-m02 multinode-20220921215635-5916-m02: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4afd4106650d38c7fada811af4ec20272b95a731500d634c6c2a72f320856911 (br-4afd4106650d): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p multinode-20220921215635-5916-m02" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916-m02 container: docker volume create multinode-20220921215635-5916-m02 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916-m02 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916-m02: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916-m02': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916-m02: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for multinode-20220921215635-5916-m02 container: docker volume create multinode-20220921215635-5916-m02 --label name.minikube.sigs.k8s.io=multinode-20220921215635-5916-m02 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create multinode-20220921215635-5916-m02: error while creating volume root path '/var/lib/docker/volumes/multinode-20220921215635-5916-m02': mkdir /var/lib/docker/volumes/multinode-20220921215635-5916-m02: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
multinode_test.go:460: failed to start profile. args "out/minikube-windows-amd64.exe start -p multinode-20220921215635-5916-m02 --driver=docker" : exit status 60
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220921215635-5916
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220921215635-5916: exit status 80 (1.0476408s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_30.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20220921215635-5916-m02
multinode_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20220921215635-5916-m02: (1.6027807s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ValidateNameConflict]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220921215635-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect multinode-20220921215635-5916: exit status 1 (271.3017ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220921215635-5916 -n multinode-20220921215635-5916: exit status 7 (547.4707ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:02:38.040831    2180 status.go:247] status error: host: state: unknown state "multinode-20220921215635-5916": docker container inspect multinode-20220921215635-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20220921215635-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20220921215635-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (100.53s)
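Both start attempts above fail on the same two daemon errors: each candidate subnet conflicts with a leftover minikube bridge network ("networks have overlapping IPv4"), and volume creation is rejected because /var/lib/docker/volumes inside the Docker Desktop VM is read-only. A minimal sketch of commands that could confirm both conditions on the affected host follows; it assumes the Docker Desktop / WSL2 setup shown in this log, the network ID prefix is copied from the error text above, and the probe volume name is arbitrary.

	# list leftover minikube-labelled networks that can hold the conflicting subnets
	docker network ls --filter "label=created_by.minikube.sigs.k8s.io=true"
	# show the subnet of the network named in the conflict (ID prefix taken from the error above)
	docker network inspect a04d36bfb3cf --format "{{range .IPAM.Config}}{{.Subnet}}{{end}}"
	# a throwaway volume reproduces the read-only /var/lib/docker/volumes failure if it still persists
	docker volume create probe-readonly
	docker volume rm probe-readonly
	# cleanup along the lines the log suggests: drop stale profiles, then restart the Docker backend
	minikube delete --all
	wsl --shutdown

After wsl --shutdown, Docker Desktop has to bring its WSL distributions back up before the next start; this corresponds to the "Suggestion: Restart Docker" line emitted above and the linked issue #6825.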

                                                
                                    
TestPreload (52.07s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220921220240-5916 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
preload_test.go:48: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p test-preload-20220921220240-5916 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: exit status 60 (49.4010162s)

                                                
                                                
-- stdout --
	* [test-preload-20220921220240-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node test-preload-20220921220240-5916 in cluster test-preload-20220921220240-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "test-preload-20220921220240-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:02:40.677747    9048 out.go:296] Setting OutFile to fd 928 ...
	I0921 22:02:40.738359    9048 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:02:40.738359    9048 out.go:309] Setting ErrFile to fd 752...
	I0921 22:02:40.738359    9048 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:02:40.756391    9048 out.go:303] Setting JSON to false
	I0921 22:02:40.759192    9048 start.go:115] hostinfo: {"hostname":"minikube2","uptime":3829,"bootTime":1663793931,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:02:40.759251    9048 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:02:40.764658    9048 out.go:177] * [test-preload-20220921220240-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:02:40.767955    9048 notify.go:214] Checking for updates...
	I0921 22:02:40.771588    9048 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:02:40.774797    9048 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:02:40.778746    9048 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:02:40.781792    9048 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:02:40.786316    9048 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:02:40.786316    9048 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:02:41.066396    9048 docker.go:137] docker version: linux-20.10.17
	I0921 22:02:41.074326    9048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:02:41.594350    9048 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:63 SystemTime:2022-09-21 22:02:41.217615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-p
lugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:02:41.598437    9048 out.go:177] * Using the docker driver based on user configuration
	I0921 22:02:41.602316    9048 start.go:284] selected driver: docker
	I0921 22:02:41.602316    9048 start.go:808] validating driver "docker" against <nil>
	I0921 22:02:41.602316    9048 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:02:41.662949    9048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:02:42.177627    9048 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:63 SystemTime:2022-09-21 22:02:41.8242178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:02:42.178372    9048 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:02:42.179203    9048 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:02:42.183359    9048 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:02:42.185756    9048 cni.go:95] Creating CNI manager for ""
	I0921 22:02:42.185756    9048 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:02:42.185756    9048 start_flags.go:316] config:
	{Name:test-preload-20220921220240-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220921220240-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:02:42.188587    9048 out.go:177] * Starting control plane node test-preload-20220921220240-5916 in cluster test-preload-20220921220240-5916
	I0921 22:02:42.191805    9048 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:02:42.194439    9048 out.go:177] * Pulling base image ...
	I0921 22:02:42.197712    9048 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:02:42.197712    9048 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0921 22:02:42.197965    9048 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\test-preload-20220921220240-5916\config.json ...
	I0921 22:02:42.198052    9048 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0
	I0921 22:02:42.198146    9048 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause:3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1
	I0921 22:02:42.198146    9048 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0
	I0921 22:02:42.198247    9048 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\test-preload-20220921220240-5916\config.json: {Name:mk874635599f5ac3faf2e38930d1fa484de07f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:02:42.198293    9048 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns:1.6.5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5
	I0921 22:02:42.198247    9048 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0
	I0921 22:02:42.198052    9048 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0
	I0921 22:02:42.198052    9048 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0
	I0921 22:02:42.198052    9048 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0921 22:02:42.366669    9048 cache.go:107] acquiring lock: {Name:mk2bed4c2f349144087ca9b4676d08589a5f3b25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:02:42.366669    9048 cache.go:107] acquiring lock: {Name:mkfe379c4c474168d5a5fd2dde0e9bf1347e993b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:02:42.366669    9048 cache.go:107] acquiring lock: {Name:mk93ccdec90972c05247bea23df9b97c54ef0291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:02:42.366669    9048 cache.go:107] acquiring lock: {Name:mkb269f15b2e3b2569308dbf84de26df267b2fcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:02:42.366669    9048 cache.go:107] acquiring lock: {Name:mkef49659bc6e08b20a8521eb6ce4fb712ad39c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:02:42.366669    9048 cache.go:107] acquiring lock: {Name:mk965b06109155c0e187b8b69e2b0548d9bccb3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:02:42.366669    9048 cache.go:107] acquiring lock: {Name:mkef9a3d9e3cbb1fe114c12bec441ddb11fca0c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:02:42.366669    9048 cache.go:107] acquiring lock: {Name:mk7af4d324ae5378e4084d0d909beff30d29e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:02:42.366669    9048 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0921 22:02:42.366669    9048 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 167.8993ms
	I0921 22:02:42.366669    9048 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0921 22:02:42.368617    9048 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0921 22:02:42.368617    9048 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0921 22:02:42.369606    9048 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0921 22:02:42.369606    9048 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0921 22:02:42.369606    9048 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0921 22:02:42.369606    9048 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0921 22:02:42.369606    9048 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0921 22:02:42.391558    9048 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0921 22:02:42.398530    9048 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0921 22:02:42.402535    9048 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0921 22:02:42.412547    9048 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0921 22:02:42.413542    9048 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0921 22:02:42.421534    9048 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0921 22:02:42.434526    9048 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0921 22:02:42.461563    9048 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:02:42.461563    9048 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:02:42.461563    9048 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:02:42.461563    9048 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:02:42.461563    9048 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:02:42.461563    9048 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:02:42.461563    9048 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:02:42.461563    9048 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:02:42.461563    9048 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	W0921 22:02:42.492837    9048 image.go:187] authn lookup for k8s.gcr.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0921 22:02:42.595292    9048 image.go:187] authn lookup for k8s.gcr.io/kube-proxy:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0921 22:02:42.694491    9048 image.go:187] authn lookup for k8s.gcr.io/coredns:1.6.5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0921 22:02:42.805206    9048 image.go:187] authn lookup for k8s.gcr.io/kube-controller-manager:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0921 22:02:42.912689    9048 image.go:187] authn lookup for k8s.gcr.io/kube-apiserver:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0921 22:02:43.011382    9048 image.go:187] authn lookup for k8s.gcr.io/kube-scheduler:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0921 22:02:43.048541    9048 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1
	I0921 22:02:43.087477    9048 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0
	W0921 22:02:43.135235    9048 image.go:187] authn lookup for k8s.gcr.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0921 22:02:43.150778    9048 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5
	I0921 22:02:43.179701    9048 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1 exists
	I0921 22:02:43.180579    9048 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\pause_3.1" took 982.4251ms
	I0921 22:02:43.180617    9048 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1 succeeded
	I0921 22:02:43.270764    9048 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0
	I0921 22:02:43.372736    9048 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0
	I0921 22:02:43.397077    9048 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0
	I0921 22:02:43.532680    9048 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5 exists
	I0921 22:02:43.532680    9048 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\coredns_1.6.5" took 1.3343028s
	I0921 22:02:43.532680    9048 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5 succeeded
	I0921 22:02:43.662843    9048 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0
	I0921 22:02:44.370083    9048 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0 exists
	I0921 22:02:44.370383    9048 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-controller-manager_v1.17.0" took 2.1722203s
	I0921 22:02:44.370383    9048 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0 succeeded
	I0921 22:02:44.652653    9048 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0 exists
	I0921 22:02:44.653611    9048 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-scheduler_v1.17.0" took 2.4550537s
	I0921 22:02:44.653611    9048 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0 succeeded
	I0921 22:02:44.897662    9048 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0 exists
	I0921 22:02:44.898504    9048 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-apiserver_v1.17.0" took 2.7000834s
	I0921 22:02:44.898504    9048 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0 succeeded
	I0921 22:02:45.021956    9048 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:02:45.022139    9048 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:02:45.022212    9048 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:02:45.022442    9048 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:02:45.243572    9048 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0 exists
	I0921 22:02:45.244567    9048 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-proxy_v1.17.0" took 3.0459148s
	I0921 22:02:45.244567    9048 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0 succeeded
	I0921 22:02:45.246600    9048 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:02:45.507373    9048 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 exists
	I0921 22:02:45.507373    9048 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\etcd_3.4.3-0" took 3.3092017s
	I0921 22:02:45.507649    9048 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 succeeded
	I0921 22:02:45.507649    9048 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 700ms
	I0921 22:02:46.585822    9048 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:02:46.585822    9048 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:02:46.585822    9048 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:02:46.586435    9048 start.go:364] acquiring machines lock for test-preload-20220921220240-5916: {Name:mk79cc5c93885eff585fa46b3b396ad9bd52adf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:02:46.586699    9048 start.go:368] acquired machines lock for "test-preload-20220921220240-5916" in 100.9µs
	I0921 22:02:46.586699    9048 start.go:93] Provisioning new machine with config: &{Name:test-preload-20220921220240-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220921220240-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:02:46.586699    9048 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:02:46.591787    9048 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:02:46.592325    9048 start.go:159] libmachine.API.Create for "test-preload-20220921220240-5916" (driver="docker")
	I0921 22:02:46.592325    9048 client.go:168] LocalClient.Create starting
	I0921 22:02:46.592988    9048 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:02:46.592988    9048 main.go:134] libmachine: Decoding PEM data...
	I0921 22:02:46.592988    9048 main.go:134] libmachine: Parsing certificate...
	I0921 22:02:46.592988    9048 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:02:46.593730    9048 main.go:134] libmachine: Decoding PEM data...
	I0921 22:02:46.593764    9048 main.go:134] libmachine: Parsing certificate...
	I0921 22:02:46.601567    9048 cli_runner.go:164] Run: docker network inspect test-preload-20220921220240-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:02:46.787155    9048 cli_runner.go:211] docker network inspect test-preload-20220921220240-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:02:46.795099    9048 network_create.go:272] running [docker network inspect test-preload-20220921220240-5916] to gather additional debugging logs...
	I0921 22:02:46.795099    9048 cli_runner.go:164] Run: docker network inspect test-preload-20220921220240-5916
	W0921 22:02:47.007089    9048 cli_runner.go:211] docker network inspect test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:02:47.007130    9048 network_create.go:275] error running [docker network inspect test-preload-20220921220240-5916]: docker network inspect test-preload-20220921220240-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220921220240-5916
	I0921 22:02:47.007189    9048 network_create.go:277] output of [docker network inspect test-preload-20220921220240-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220921220240-5916
	
	** /stderr **
	I0921 22:02:47.015395    9048 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:02:47.234342    9048 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003b0058] misses:0}
	I0921 22:02:47.234342    9048 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:02:47.234342    9048 network_create.go:115] attempt to create docker network test-preload-20220921220240-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:02:47.241783    9048 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 test-preload-20220921220240-5916
	W0921 22:02:47.426819    9048 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 test-preload-20220921220240-5916 returned with exit code 1
	E0921 22:02:47.426819    9048 network_create.go:104] error while trying to create docker network test-preload-20220921220240-5916 192.168.49.0/24: create docker network test-preload-20220921220240-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 test-preload-20220921220240-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b02324e6a4671c3c9f07e3253be7191d3f24cb8cf64df95583ac9aab25cf5b98 (br-b02324e6a467): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:02:47.426819    9048 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220921220240-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 test-preload-20220921220240-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b02324e6a4671c3c9f07e3253be7191d3f24cb8cf64df95583ac9aab25cf5b98 (br-b02324e6a467): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220921220240-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 test-preload-20220921220240-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b02324e6a4671c3c9f07e3253be7191d3f24cb8cf64df95583ac9aab25cf5b98 (br-b02324e6a467): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:02:47.440843    9048 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:02:47.648368    9048 cli_runner.go:164] Run: docker volume create test-preload-20220921220240-5916 --label name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:02:47.840093    9048 cli_runner.go:211] docker volume create test-preload-20220921220240-5916 --label name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:02:47.840403    9048 client.go:171] LocalClient.Create took 1.2480691s
	I0921 22:02:49.863357    9048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:02:49.869374    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:02:50.084385    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:02:50.084520    9048 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:02:50.370323    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:02:50.583363    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:02:50.583671    9048 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:02:51.134430    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:02:51.312546    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	W0921 22:02:51.312546    9048 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	
	W0921 22:02:51.312546    9048 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:02:51.323303    9048 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:02:51.332453    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:02:51.515928    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:02:51.515928    9048 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:02:51.774455    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:02:51.968090    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:02:51.968090    9048 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:02:52.338340    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:02:52.529461    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:02:52.529498    9048 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:02:53.208098    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:02:53.400926    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	W0921 22:02:53.401194    9048 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	
	W0921 22:02:53.401252    9048 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:02:53.401252    9048 start.go:128] duration metric: createHost completed in 6.8145019s
	I0921 22:02:53.401252    9048 start.go:83] releasing machines lock for "test-preload-20220921220240-5916", held for 6.8145019s
	W0921 22:02:53.401252    9048 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for test-preload-20220921220240-5916 container: docker volume create test-preload-20220921220240-5916 --label name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220921220240-5916: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220921220240-5916': mkdir /var/lib/docker/volumes/test-preload-20220921220240-5916: read-only file system
	I0921 22:02:53.416210    9048 cli_runner.go:164] Run: docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}
	W0921 22:02:53.634746    9048 cli_runner.go:211] docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:02:53.634834    9048 delete.go:82] Unable to get host status for test-preload-20220921220240-5916, assuming it has already been deleted: state: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	W0921 22:02:53.635008    9048 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for test-preload-20220921220240-5916 container: docker volume create test-preload-20220921220240-5916 --label name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220921220240-5916: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220921220240-5916': mkdir /var/lib/docker/volumes/test-preload-20220921220240-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for test-preload-20220921220240-5916 container: docker volume create test-preload-20220921220240-5916 --label name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220921220240-5916: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220921220240-5916': mkdir /var/lib/docker/volumes/test-preload-20220921220240-5916: read-only file system
	
	I0921 22:02:53.635008    9048 start.go:617] Will try again in 5 seconds ...
	I0921 22:02:58.645853    9048 start.go:364] acquiring machines lock for test-preload-20220921220240-5916: {Name:mk79cc5c93885eff585fa46b3b396ad9bd52adf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:02:58.645973    9048 start.go:368] acquired machines lock for "test-preload-20220921220240-5916" in 0s
	I0921 22:02:58.646517    9048 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:02:58.646517    9048 fix.go:55] fixHost starting: 
	I0921 22:02:58.660534    9048 cli_runner.go:164] Run: docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}
	W0921 22:02:58.876421    9048 cli_runner.go:211] docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:02:58.876421    9048 fix.go:103] recreateIfNeeded on test-preload-20220921220240-5916: state= err=unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:02:58.876421    9048 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:02:58.880209    9048 out.go:177] * docker "test-preload-20220921220240-5916" container is missing, will recreate.
	I0921 22:02:58.883459    9048 delete.go:124] DEMOLISHING test-preload-20220921220240-5916 ...
	I0921 22:02:58.895705    9048 cli_runner.go:164] Run: docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}
	W0921 22:02:59.116610    9048 cli_runner.go:211] docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:02:59.116610    9048 stop.go:75] unable to get state: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:02:59.116610    9048 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:02:59.132531    9048 cli_runner.go:164] Run: docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}
	W0921 22:02:59.332759    9048 cli_runner.go:211] docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:02:59.332759    9048 delete.go:82] Unable to get host status for test-preload-20220921220240-5916, assuming it has already been deleted: state: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:02:59.340640    9048 cli_runner.go:164] Run: docker container inspect -f {{.Id}} test-preload-20220921220240-5916
	W0921 22:02:59.534300    9048 cli_runner.go:211] docker container inspect -f {{.Id}} test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:02:59.534614    9048 kic.go:356] could not find the container test-preload-20220921220240-5916 to remove it. will try anyways
	I0921 22:02:59.542990    9048 cli_runner.go:164] Run: docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}
	W0921 22:02:59.721221    9048 cli_runner.go:211] docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:02:59.721221    9048 oci.go:84] error getting container status, will try to delete anyways: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:02:59.728425    9048 cli_runner.go:164] Run: docker exec --privileged -t test-preload-20220921220240-5916 /bin/bash -c "sudo init 0"
	W0921 22:02:59.924740    9048 cli_runner.go:211] docker exec --privileged -t test-preload-20220921220240-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:02:59.924838    9048 oci.go:646] error shutdown test-preload-20220921220240-5916: docker exec --privileged -t test-preload-20220921220240-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:00.948756    9048 cli_runner.go:164] Run: docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}
	W0921 22:03:01.142117    9048 cli_runner.go:211] docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:03:01.142319    9048 oci.go:658] temporary error verifying shutdown: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:01.142319    9048 oci.go:660] temporary error: container test-preload-20220921220240-5916 status is  but expect it to be exited
	I0921 22:03:01.142319    9048 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:01.490218    9048 cli_runner.go:164] Run: docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}
	W0921 22:03:01.668674    9048 cli_runner.go:211] docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:03:01.668674    9048 oci.go:658] temporary error verifying shutdown: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:01.668986    9048 oci.go:660] temporary error: container test-preload-20220921220240-5916 status is  but expect it to be exited
	I0921 22:03:01.669031    9048 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:02.129329    9048 cli_runner.go:164] Run: docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}
	W0921 22:03:02.326095    9048 cli_runner.go:211] docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:03:02.326339    9048 oci.go:658] temporary error verifying shutdown: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:02.326389    9048 oci.go:660] temporary error: container test-preload-20220921220240-5916 status is  but expect it to be exited
	I0921 22:03:02.326389    9048 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:03.237828    9048 cli_runner.go:164] Run: docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}
	W0921 22:03:03.459781    9048 cli_runner.go:211] docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:03:03.459876    9048 oci.go:658] temporary error verifying shutdown: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:03.459876    9048 oci.go:660] temporary error: container test-preload-20220921220240-5916 status is  but expect it to be exited
	I0921 22:03:03.459876    9048 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:05.184403    9048 cli_runner.go:164] Run: docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}
	W0921 22:03:05.373669    9048 cli_runner.go:211] docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:03:05.373669    9048 oci.go:658] temporary error verifying shutdown: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:05.373669    9048 oci.go:660] temporary error: container test-preload-20220921220240-5916 status is  but expect it to be exited
	I0921 22:03:05.373669    9048 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:08.715616    9048 cli_runner.go:164] Run: docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}
	W0921 22:03:08.895701    9048 cli_runner.go:211] docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:03:08.895784    9048 oci.go:658] temporary error verifying shutdown: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:08.895784    9048 oci.go:660] temporary error: container test-preload-20220921220240-5916 status is  but expect it to be exited
	I0921 22:03:08.895784    9048 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:11.624090    9048 cli_runner.go:164] Run: docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}
	W0921 22:03:11.816887    9048 cli_runner.go:211] docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:03:11.817270    9048 oci.go:658] temporary error verifying shutdown: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:11.817308    9048 oci.go:660] temporary error: container test-preload-20220921220240-5916 status is  but expect it to be exited
	I0921 22:03:11.817308    9048 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:16.858018    9048 cli_runner.go:164] Run: docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}
	W0921 22:03:17.049778    9048 cli_runner.go:211] docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:03:17.049778    9048 oci.go:658] temporary error verifying shutdown: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:17.049778    9048 oci.go:660] temporary error: container test-preload-20220921220240-5916 status is  but expect it to be exited
	I0921 22:03:17.049778    9048 oci.go:88] couldn't shut down test-preload-20220921220240-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	 
	I0921 22:03:17.057394    9048 cli_runner.go:164] Run: docker rm -f -v test-preload-20220921220240-5916
	I0921 22:03:17.257279    9048 cli_runner.go:164] Run: docker container inspect -f {{.Id}} test-preload-20220921220240-5916
	W0921 22:03:17.453057    9048 cli_runner.go:211] docker container inspect -f {{.Id}} test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:17.461165    9048 cli_runner.go:164] Run: docker network inspect test-preload-20220921220240-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:03:17.654700    9048 cli_runner.go:211] docker network inspect test-preload-20220921220240-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:03:17.661828    9048 network_create.go:272] running [docker network inspect test-preload-20220921220240-5916] to gather additional debugging logs...
	I0921 22:03:17.661828    9048 cli_runner.go:164] Run: docker network inspect test-preload-20220921220240-5916
	W0921 22:03:17.857133    9048 cli_runner.go:211] docker network inspect test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:17.857369    9048 network_create.go:275] error running [docker network inspect test-preload-20220921220240-5916]: docker network inspect test-preload-20220921220240-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220921220240-5916
	I0921 22:03:17.857369    9048 network_create.go:277] output of [docker network inspect test-preload-20220921220240-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220921220240-5916
	
	** /stderr **
	W0921 22:03:17.857968    9048 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:03:17.857968    9048 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:03:18.871720    9048 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:03:18.876933    9048 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:03:18.877265    9048 start.go:159] libmachine.API.Create for "test-preload-20220921220240-5916" (driver="docker")
	I0921 22:03:18.877329    9048 client.go:168] LocalClient.Create starting
	I0921 22:03:18.877684    9048 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:03:18.877684    9048 main.go:134] libmachine: Decoding PEM data...
	I0921 22:03:18.877684    9048 main.go:134] libmachine: Parsing certificate...
	I0921 22:03:18.878440    9048 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:03:18.878440    9048 main.go:134] libmachine: Decoding PEM data...
	I0921 22:03:18.878440    9048 main.go:134] libmachine: Parsing certificate...
	I0921 22:03:18.886566    9048 cli_runner.go:164] Run: docker network inspect test-preload-20220921220240-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:03:19.073162    9048 cli_runner.go:211] docker network inspect test-preload-20220921220240-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:03:19.081131    9048 network_create.go:272] running [docker network inspect test-preload-20220921220240-5916] to gather additional debugging logs...
	I0921 22:03:19.082104    9048 cli_runner.go:164] Run: docker network inspect test-preload-20220921220240-5916
	W0921 22:03:19.273623    9048 cli_runner.go:211] docker network inspect test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:19.273623    9048 network_create.go:275] error running [docker network inspect test-preload-20220921220240-5916]: docker network inspect test-preload-20220921220240-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220921220240-5916
	I0921 22:03:19.273623    9048 network_create.go:277] output of [docker network inspect test-preload-20220921220240-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220921220240-5916
	
	** /stderr **
	I0921 22:03:19.281647    9048 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:03:19.491176    9048 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003b0058] amended:false}} dirty:map[] misses:0}
	I0921 22:03:19.491176    9048 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:03:19.508849    9048 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003b0058] amended:true}} dirty:map[192.168.49.0:0xc0003b0058 192.168.58.0:0xc000572590] misses:0}
	I0921 22:03:19.508849    9048 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:03:19.508849    9048 network_create.go:115] attempt to create docker network test-preload-20220921220240-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:03:19.516543    9048 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 test-preload-20220921220240-5916
	W0921 22:03:19.707669    9048 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 test-preload-20220921220240-5916 returned with exit code 1
	E0921 22:03:19.707669    9048 network_create.go:104] error while trying to create docker network test-preload-20220921220240-5916 192.168.58.0/24: create docker network test-preload-20220921220240-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 test-preload-20220921220240-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6e89e032f8f27e9971cf68bb221f9bffdc4a4f561ce9102213d42d8559f09e8f (br-6e89e032f8f2): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:03:19.707669    9048 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220921220240-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 test-preload-20220921220240-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6e89e032f8f27e9971cf68bb221f9bffdc4a4f561ce9102213d42d8559f09e8f (br-6e89e032f8f2): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network test-preload-20220921220240-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 test-preload-20220921220240-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6e89e032f8f27e9971cf68bb221f9bffdc4a4f561ce9102213d42d8559f09e8f (br-6e89e032f8f2): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:03:19.721184    9048 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:03:19.931698    9048 cli_runner.go:164] Run: docker volume create test-preload-20220921220240-5916 --label name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:03:20.126370    9048 cli_runner.go:211] docker volume create test-preload-20220921220240-5916 --label name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:03:20.126515    9048 client.go:171] LocalClient.Create took 1.2491764s
	I0921 22:03:22.145295    9048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:03:22.152242    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:22.338309    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:22.338635    9048 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:22.594436    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:22.802650    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:22.802930    9048 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:23.111808    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:23.294472    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:23.294602    9048 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:23.751667    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:23.974963    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	W0921 22:03:23.974963    9048 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	
	W0921 22:03:23.974963    9048 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:23.985566    9048 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:03:23.992123    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:24.193104    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:24.193104    9048 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:24.388505    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:24.583636    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:24.583636    9048 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:24.870633    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:25.061904    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:25.061904    9048 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:25.565443    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:25.747999    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	W0921 22:03:25.748337    9048 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	
	W0921 22:03:25.748405    9048 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:25.748405    9048 start.go:128] duration metric: createHost completed in 6.8766336s
	I0921 22:03:25.757689    9048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:03:25.763728    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:25.961474    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:25.961474    9048 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:26.311913    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:26.490722    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:26.491133    9048 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:26.812162    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:26.992127    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:26.992127    9048 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:27.461552    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:27.639191    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	W0921 22:03:27.639598    9048 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	
	W0921 22:03:27.639598    9048 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:27.651898    9048 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:03:27.659562    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:27.840835    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:27.840835    9048 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:28.036179    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:28.217387    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:28.217911    9048 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:28.752021    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:28.944724    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	I0921 22:03:28.944724    9048 retry.go:31] will retry after 673.154531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:29.626724    9048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916
	W0921 22:03:29.805590    9048 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916 returned with exit code 1
	W0921 22:03:29.805618    9048 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	
	W0921 22:03:29.805618    9048 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20220921220240-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220921220240-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916
	I0921 22:03:29.805618    9048 fix.go:57] fixHost completed within 31.1588679s
	I0921 22:03:29.805618    9048 start.go:83] releasing machines lock for "test-preload-20220921220240-5916", held for 31.1594117s
	W0921 22:03:29.806343    9048 out.go:239] * Failed to start docker container. Running "minikube delete -p test-preload-20220921220240-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220921220240-5916 container: docker volume create test-preload-20220921220240-5916 --label name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220921220240-5916: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220921220240-5916': mkdir /var/lib/docker/volumes/test-preload-20220921220240-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p test-preload-20220921220240-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220921220240-5916 container: docker volume create test-preload-20220921220240-5916 --label name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220921220240-5916: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220921220240-5916': mkdir /var/lib/docker/volumes/test-preload-20220921220240-5916: read-only file system
	
	I0921 22:03:29.811826    9048 out.go:177] 
	W0921 22:03:29.812855    9048 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220921220240-5916 container: docker volume create test-preload-20220921220240-5916 --label name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220921220240-5916: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220921220240-5916': mkdir /var/lib/docker/volumes/test-preload-20220921220240-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for test-preload-20220921220240-5916 container: docker volume create test-preload-20220921220240-5916 --label name.minikube.sigs.k8s.io=test-preload-20220921220240-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create test-preload-20220921220240-5916: error while creating volume root path '/var/lib/docker/volumes/test-preload-20220921220240-5916': mkdir /var/lib/docker/volumes/test-preload-20220921220240-5916: read-only file system
	
	W0921 22:03:29.812855    9048 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:03:29.812855    9048 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:03:29.812855    9048 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:50: out/minikube-windows-amd64.exe start -p test-preload-20220921220240-5916 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0 failed: exit status 60
panic.go:522: *** TestPreload FAILED at 2022-09-21 22:03:30.0179624 +0000 GMT m=+2017.570512201
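The exit status 60 above ultimately comes from the daemon error while creating the volume root path under /var/lib/docker/volumes ("read-only file system"), which points at Docker Desktop's storage rather than at the test itself. Below is a minimal Go sketch, not part of the test suite and using an invented throwaway volume name, that mimics just that volume-create step so the daemon can be checked without minikube:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Invented throwaway volume name; any unused name works.
	name := "minikube-volcheck"

	out, err := exec.Command("docker", "volume", "create", name).CombinedOutput()
	if err != nil {
		// On the host in this report this would print the same
		// "read-only file system" daemon error seen above.
		fmt.Printf("volume create failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("created volume: %s", out)

	// Clean up the throwaway volume; ignore errors on a broken daemon.
	_ = exec.Command("docker", "volume", "rm", "-f", name).Run()
}

On a healthy daemon this prints the volume name; reproducing the read-only error here would confirm that the "Suggestion: Restart Docker" hint above applies to the host as a whole, not to this particular profile.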
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220921220240-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect test-preload-20220921220240-5916: exit status 1 (276.6299ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: test-preload-20220921220240-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
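As the post-mortem shows, "docker inspect" on a missing object exits with status 1 and prints "[]" on stdout. A small illustrative Go sketch of that same check, reusing the profile name from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Profile/container name taken from the log above; adjust for other runs.
	name := "test-preload-20220921220240-5916"

	out, err := exec.Command("docker", "inspect", name).Output()
	if err != nil && strings.TrimSpace(string(out)) == "[]" {
		// Same shape as the post-mortem: exit status 1 with "[]" on stdout.
		fmt.Println("no such object:", name)
		return
	}
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println(string(out))
}

Here it simply confirms the container was never created, matching the "Nonexistent" host status reported below.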
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-20220921220240-5916 -n test-preload-20220921220240-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-20220921220240-5916 -n test-preload-20220921220240-5916: exit status 7 (566.9585ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:03:30.838092    8472 status.go:247] status error: host: state: unknown state "test-preload-20220921220240-5916": docker container inspect test-preload-20220921220240-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20220921220240-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-20220921220240-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "test-preload-20220921220240-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20220921220240-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20220921220240-5916: (1.6498772s)
--- FAIL: TestPreload (52.07s)
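Most of the volume in the log above is the SSH host-port lookup retrying against a container that never got created: each attempt runs "docker container inspect" with a Go template that indexes the 22/tcp port binding. A standalone Go sketch of that lookup with a simple, invented retry budget (the real backoff lives in retry.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// hostPort22 runs essentially the same template-based inspect the log retries above.
func hostPort22(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Container name from the log; the retry budget and delays here are invented.
	name := "test-preload-20220921220240-5916"
	delay := 250 * time.Millisecond

	for attempt := 0; attempt < 5; attempt++ {
		if port, err := hostPort22(name); err == nil {
			fmt.Println("host port for 22/tcp:", port)
			return
		}
		time.Sleep(delay)
		delay *= 2 // grow the delay between attempts, similar in spirit to retry.go
	}
	fmt.Println("gave up: the container was never created, so the lookup can never succeed")
}

Since the container does not exist, every attempt fails identically, which is why the retries in the log only stop once createHost gives up and the machines lock is released.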

                                                
                                    
TestScheduledStopWindows (50.79s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20220921220332-5916 --memory=2048 --driver=docker
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p scheduled-stop-20220921220332-5916 --memory=2048 --driver=docker: exit status 60 (48.3554315s)

                                                
                                                
-- stdout --
	* [scheduled-stop-20220921220332-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-20220921220332-5916 in cluster scheduled-stop-20220921220332-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "scheduled-stop-20220921220332-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 22:03:39.121402    4536 network_create.go:104] error while trying to create docker network scheduled-stop-20220921220332-5916 192.168.49.0/24: create docker network scheduled-stop-20220921220332-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-20220921220332-5916 scheduled-stop-20220921220332-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9d91a54c226652f7dbc4ef93d782942ed0062225aee2850c15958878bc7c6989 (br-9d91a54c2266): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220921220332-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-20220921220332-5916 scheduled-stop-20220921220332-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9d91a54c226652f7dbc4ef93d782942ed0062225aee2850c15958878bc7c6989 (br-9d91a54c2266): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220921220332-5916 container: docker volume create scheduled-stop-20220921220332-5916 --label name.minikube.sigs.k8s.io=scheduled-stop-20220921220332-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220921220332-5916: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220921220332-5916': mkdir /var/lib/docker/volumes/scheduled-stop-20220921220332-5916: read-only file system
	
	E0921 22:04:11.559190    4536 network_create.go:104] error while trying to create docker network scheduled-stop-20220921220332-5916 192.168.58.0/24: create docker network scheduled-stop-20220921220332-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-20220921220332-5916 scheduled-stop-20220921220332-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c4713925cd73c7cd31c51961ae86888ad1109427cf4d6abdd44e3d3a032a0cfa (br-c4713925cd73): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220921220332-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-20220921220332-5916 scheduled-stop-20220921220332-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c4713925cd73c7cd31c51961ae86888ad1109427cf4d6abdd44e3d3a032a0cfa (br-c4713925cd73): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p scheduled-stop-20220921220332-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220921220332-5916 container: docker volume create scheduled-stop-20220921220332-5916 --label name.minikube.sigs.k8s.io=scheduled-stop-20220921220332-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220921220332-5916: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220921220332-5916': mkdir /var/lib/docker/volumes/scheduled-stop-20220921220332-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220921220332-5916 container: docker volume create scheduled-stop-20220921220332-5916 --label name.minikube.sigs.k8s.io=scheduled-stop-20220921220332-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220921220332-5916: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220921220332-5916': mkdir /var/lib/docker/volumes/scheduled-stop-20220921220332-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
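The "networks have overlapping IPv4" errors above mean both subnets minikube tried (192.168.49.0/24 and then 192.168.58.0/24) were already claimed by leftover bridges (br-a04d36bfb3cf and br-8a3cd8d165a4). A diagnostic Go sketch, not from the suite, that lists every Docker network with its subnets so the stale bridge can be identified:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		fmt.Println("docker network ls failed:", err)
		return
	}
	for _, id := range strings.Fields(string(ids)) {
		// Print "name: subnet(s)" for each network; bridges holding
		// 192.168.49.0/24 or 192.168.58.0/24 are the conflicting ones here.
		tmpl := `{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}`
		out, err := exec.Command("docker", "network", "inspect", "-f", tmpl, id).Output()
		if err != nil {
			continue
		}
		fmt.Println(strings.TrimSpace(string(out)))
	}
}

A stale network can then be removed with "docker network rm NAME" to free the subnet, although on this host the subsequent read-only volume error would still abort the start.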
scheduled_stop_test.go:130: starting minikube: exit status 60

                                                
                                                
-- stdout --
	* [scheduled-stop-20220921220332-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-20220921220332-5916 in cluster scheduled-stop-20220921220332-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "scheduled-stop-20220921220332-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms   [repeated progress frames elided]
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 22:03:39.121402    4536 network_create.go:104] error while trying to create docker network scheduled-stop-20220921220332-5916 192.168.49.0/24: create docker network scheduled-stop-20220921220332-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-20220921220332-5916 scheduled-stop-20220921220332-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9d91a54c226652f7dbc4ef93d782942ed0062225aee2850c15958878bc7c6989 (br-9d91a54c2266): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220921220332-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-20220921220332-5916 scheduled-stop-20220921220332-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9d91a54c226652f7dbc4ef93d782942ed0062225aee2850c15958878bc7c6989 (br-9d91a54c2266): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220921220332-5916 container: docker volume create scheduled-stop-20220921220332-5916 --label name.minikube.sigs.k8s.io=scheduled-stop-20220921220332-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220921220332-5916: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220921220332-5916': mkdir /var/lib/docker/volumes/scheduled-stop-20220921220332-5916: read-only file system
	
	E0921 22:04:11.559190    4536 network_create.go:104] error while trying to create docker network scheduled-stop-20220921220332-5916 192.168.58.0/24: create docker network scheduled-stop-20220921220332-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-20220921220332-5916 scheduled-stop-20220921220332-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c4713925cd73c7cd31c51961ae86888ad1109427cf4d6abdd44e3d3a032a0cfa (br-c4713925cd73): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network scheduled-stop-20220921220332-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-20220921220332-5916 scheduled-stop-20220921220332-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c4713925cd73c7cd31c51961ae86888ad1109427cf4d6abdd44e3d3a032a0cfa (br-c4713925cd73): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p scheduled-stop-20220921220332-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220921220332-5916 container: docker volume create scheduled-stop-20220921220332-5916 --label name.minikube.sigs.k8s.io=scheduled-stop-20220921220332-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220921220332-5916: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220921220332-5916': mkdir /var/lib/docker/volumes/scheduled-stop-20220921220332-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for scheduled-stop-20220921220332-5916 container: docker volume create scheduled-stop-20220921220332-5916 --label name.minikube.sigs.k8s.io=scheduled-stop-20220921220332-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create scheduled-stop-20220921220332-5916: error while creating volume root path '/var/lib/docker/volumes/scheduled-stop-20220921220332-5916': mkdir /var/lib/docker/volumes/scheduled-stop-20220921220332-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
panic.go:522: *** TestScheduledStopWindows FAILED at 2022-09-21 22:04:20.87899 +0000 GMT m=+2068.431158301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopWindows]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-20220921220332-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect scheduled-stop-20220921220332-5916: exit status 1 (235.974ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: scheduled-stop-20220921220332-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220921220332-5916 -n scheduled-stop-20220921220332-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220921220332-5916 -n scheduled-stop-20220921220332-5916: exit status 7 (548.5968ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:04:21.643538    1904 status.go:247] status error: host: state: unknown state "scheduled-stop-20220921220332-5916": docker container inspect scheduled-stop-20220921220332-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: scheduled-stop-20220921220332-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-20220921220332-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-20220921220332-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20220921220332-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20220921220332-5916: (1.6321073s)
--- FAIL: TestScheduledStopWindows (50.79s)
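
Diagnostic sketch (not part of the test run): the failure above combines two symptoms that minikube itself reports, an address-pool collision ("networks have overlapping IPv4") and a read-only /var/lib/docker inside the Docker Desktop backend (PR_DOCKER_READONLY_VOL). Assuming the host is still in that state, the standard docker/minikube CLI calls below could confirm both conditions; the network ID and profile name are copied from the log above, and "readonly-probe" is an arbitrary volume name used only for the check.

    # list bridge networks and show the subnet of the network the daemon reported as conflicting
    docker network ls --filter driver=bridge
    docker network inspect 8a3cd8d165a4 --format "{{range .IPAM.Config}}{{.Subnet}}{{end}}"
    # probe whether the daemon can create volumes at all (the log shows mkdir failing on a read-only filesystem)
    docker volume create readonly-probe
    docker volume rm readonly-probe
    # the recovery minikube suggests: restart Docker Desktop, then remove the broken profile
    minikube delete -p scheduled-stop-20220921220332-5916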

                                                
                                    
TestInsufficientStorage (11.18s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20220921220423-5916 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20220921220423-5916 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (9.0446197s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fce8e563-a1df-499e-b50c-2eea8b18c9bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220921220423-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"07cfee82-ecf3-4686-b590-d5a850b54a3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"745383e6-0fcf-471f-9a87-d49494fba94e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"dbbdc67b-c392-4d6e-955b-ab6aff9a71ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14995"}}
	{"specversion":"1.0","id":"823bc5d3-c7fe-4367-bc84-b63e0a762b8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3a09a2b0-5cb7-4f85-9cb4-d17115a8c039","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fb0d814b-4073-4ed1-bbc8-e1d3997f4e5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f48f4d7c-d6a2-47f3-b531-43a17b4de9c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d971c200-cfbd-435f-ab0c-d0f7d9e6166d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"622978c6-925e-4e36-92e8-121fef807e20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220921220423-5916 in cluster insufficient-storage-20220921220423-5916","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e2d2e2c5-ff73-468f-aeaa-5a6c74fb7819","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d743494-9401-44c9-a576-ac590d663ada","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image"}}
	{"specversion":"1.0","id":"964361e5-7052-4570-a8fd-354181a7c0ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d600dbc-6fa7-4833-9f8d-46a9b986f1b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network insufficient-storage-20220921220423-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=insufficient-storage-20220921220423-5916 insufficient-storage-20220921220423-5916: exit status 1\nstdout:\n\nstderr:\nError response from daemon: cannot create network d030ec08655609c15af4862081abca9f07fe7ce688f9e7fc162b3ac1332c082e (br-d030ec086556): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): netwo
rks have overlapping IPv4"}}
	{"specversion":"1.0","id":"aba7e25d-9c30-42ee-8946-0e10c6ef6232","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms   [repeated progress frames elided]
	E0921 22:04:29.843288    8972 network_create.go:104] error while trying to create docker network insufficient-storage-20220921220423-5916 192.168.49.0/24: create docker network insufficient-storage-20220921220423-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=insufficient-storage-20220921220423-5916 insufficient-storage-20220921220423-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d030ec08655609c15af4862081abca9f07fe7ce688f9e7fc162b3ac1332c082e (br-d030ec086556): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220921220423-5916 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220921220423-5916 --output=json --layout=cluster: exit status 7 (532.0279ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220921220423-5916","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.27.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":520,"StatusName":"Unknown"}},"Nodes":[{"Name":"insufficient-storage-20220921220423-5916","StatusCode":520,"StatusName":"Unknown","Components":{"apiserver":{"Name":"apiserver","StatusCode":520,"StatusName":"Unknown"},"kubelet":{"Name":"kubelet","StatusCode":520,"StatusName":"Unknown"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:04:32.853331    6764 status.go:258] status error: host: state: unknown state "insufficient-storage-20220921220423-5916": docker container inspect insufficient-storage-20220921220423-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: insufficient-storage-20220921220423-5916
	E0921 22:04:32.853331    6764 status.go:261] The "insufficient-storage-20220921220423-5916" host does not exist!

                                                
                                                
** /stderr **
status_test.go:98: incorrect node status code: 507
helpers_test.go:175: Cleaning up "insufficient-storage-20220921220423-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20220921220423-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20220921220423-5916: (1.6052777s)
--- FAIL: TestInsufficientStorage (11.18s)
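
For reference, the RSRC_DOCKER_STORAGE advice in the JSON output above maps onto ordinary Docker and minikube commands. A hedged sketch of that cleanup follows; it is not part of status_test.go, and the last command only applies once a node actually exists (in this run the container was never created).

    # show how much space images, containers, volumes and build cache consume
    docker system df
    # advice 1 from the log: remove unused Docker data ("-a" also drops unused images)
    docker system prune -a
    # advice 3 from the log: prune inside the minikube node when the Docker container runtime is in use
    minikube ssh -- docker system prune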

                                                
                                    
TestRunningBinaryUpgrade (136.76s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.530211344.exe start -p running-upgrade-20220921220528-5916 --memory=2200 --vm-driver=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.530211344.exe start -p running-upgrade-20220921220528-5916 --memory=2200 --vm-driver=docker: exit status 70 (42.5513306s)

                                                
                                                
-- stdout --
	* [running-upgrade-20220921220528-5916] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig478791483
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for running-upgrade-20220921220528-5916 container: output Error response from daemon: create running-upgrade-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220921220528-5916': mkdir /var/lib/docker/volumes/running-upgrade-20220921220528-5916: read-only file system
	: exit status 1
	* docker "running-upgrade-20220921220528-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220921220528-5916 container: output Error response from daemon: create running-upgrade-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220921220528-5916': mkdir /var/lib/docker/volumes/running-upgrade-20220921220528-5916: read-only file system
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220921220528-5916", then "minikube start -p running-upgrade-20220921220528-5916 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB   [repeated progress frames elided]
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220921220528-5916 container: output Error response from daemon: create running-upgrade-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220921220528-5916': mkdir /var/lib/docker/volumes/running-upgrade-20220921220528-5916: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.530211344.exe start -p running-upgrade-20220921220528-5916 --memory=2200 --vm-driver=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.530211344.exe start -p running-upgrade-20220921220528-5916 --memory=2200 --vm-driver=docker: exit status 70 (39.9753906s)

                                                
                                                
-- stdout --
	* [running-upgrade-20220921220528-5916] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig366860107
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "running-upgrade-20220921220528-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220921220528-5916 container: output Error response from daemon: create running-upgrade-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220921220528-5916': mkdir /var/lib/docker/volumes/running-upgrade-20220921220528-5916: read-only file system
	: exit status 1
	* docker "running-upgrade-20220921220528-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220921220528-5916 container: output Error response from daemon: create running-upgrade-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220921220528-5916': mkdir /var/lib/docker/volumes/running-upgrade-20220921220528-5916: read-only file system
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220921220528-5916", then "minikube start -p running-upgrade-20220921220528-5916 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220921220528-5916 container: output Error response from daemon: create running-upgrade-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220921220528-5916': mkdir /var/lib/docker/volumes/running-upgrade-20220921220528-5916: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.530211344.exe start -p running-upgrade-20220921220528-5916 --memory=2200 --vm-driver=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.530211344.exe start -p running-upgrade-20220921220528-5916 --memory=2200 --vm-driver=docker: exit status 70 (48.4416212s)

                                                
                                                
-- stdout --
	* [running-upgrade-20220921220528-5916] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig482767468
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "running-upgrade-20220921220528-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220921220528-5916 container: output Error response from daemon: create running-upgrade-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220921220528-5916': mkdir /var/lib/docker/volumes/running-upgrade-20220921220528-5916: read-only file system
	: exit status 1
	* docker "running-upgrade-20220921220528-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220921220528-5916 container: output Error response from daemon: create running-upgrade-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220921220528-5916': mkdir /var/lib/docker/volumes/running-upgrade-20220921220528-5916: read-only file system
	: exit status 1
	  - Run: "minikube delete -p running-upgrade-20220921220528-5916", then "minikube start -p running-upgrade-20220921220528-5916 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB   [repeated progress frames elided]
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for running-upgrade-20220921220528-5916 container: output Error response from daemon: create running-upgrade-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/running-upgrade-20220921220528-5916': mkdir /var/lib/docker/volumes/running-upgrade-20220921220528-5916: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2022-09-21 22:07:43.0053614 +0000 GMT m=+2270.556004201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220921220528-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect running-upgrade-20220921220528-5916: exit status 1 (238.0592ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: running-upgrade-20220921220528-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-20220921220528-5916 -n running-upgrade-20220921220528-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-20220921220528-5916 -n running-upgrade-20220921220528-5916: exit status 7 (589.2281ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:07:43.809742    2432 status.go:247] status error: host: state: unknown state "running-upgrade-20220921220528-5916": docker container inspect running-upgrade-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: running-upgrade-20220921220528-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "running-upgrade-20220921220528-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "running-upgrade-20220921220528-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20220921220528-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20220921220528-5916: (1.6353162s)
--- FAIL: TestRunningBinaryUpgrade (136.76s)
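
All three start attempts above fail on the same condition flagged earlier in the report: /var/lib/docker inside the Docker Desktop backend has gone read-only, so no profile can create its volume. A hedged recovery sketch, combining the earlier "Restart Docker" suggestion (issue 6825) with the retry commands printed by the legacy binary itself; the first step assumes the WSL2 backend that the docker info output elsewhere in this report indicates, and Docker Desktop must be started again afterwards.

    # restart the WSL2 backend hosting /var/lib/docker, then relaunch Docker Desktop
    wsl --shutdown
    # per the binary's own suggestion, recreate the profile with verbose logging
    minikube delete -p running-upgrade-20220921220528-5916
    minikube start -p running-upgrade-20220921220528-5916 --alsologtostderr -v=1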

                                                
                                    
TestKubernetesUpgrade (72.08s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220921220835-5916 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220921220835-5916 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: exit status 60 (49.9638279s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220921220835-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-20220921220835-5916 in cluster kubernetes-upgrade-20220921220835-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-20220921220835-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:08:35.607612    8700 out.go:296] Setting OutFile to fd 1812 ...
	I0921 22:08:35.671998    8700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:08:35.671998    8700 out.go:309] Setting ErrFile to fd 1480...
	I0921 22:08:35.671998    8700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:08:35.692207    8700 out.go:303] Setting JSON to false
	I0921 22:08:35.703471    8700 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4184,"bootTime":1663793931,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:08:35.703629    8700 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:08:35.708365    8700 out.go:177] * [kubernetes-upgrade-20220921220835-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:08:35.712503    8700 notify.go:214] Checking for updates...
	I0921 22:08:35.714750    8700 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:08:35.717348    8700 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:08:35.719735    8700 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:08:35.722640    8700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:08:35.726121    8700 config.go:180] Loaded profile config "NoKubernetes-20220921220434-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0921 22:08:35.726362    8700 config.go:180] Loaded profile config "cert-expiration-20220921220719-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:08:35.726362    8700 config.go:180] Loaded profile config "docker-flags-20220921220745-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:08:35.727348    8700 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:08:35.727421    8700 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:08:36.031878    8700 docker.go:137] docker version: linux-20.10.17
	I0921 22:08:36.039660    8700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:08:36.592360    8700 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:76 SystemTime:2022-09-21 22:08:36.2059724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:08:36.596094    8700 out.go:177] * Using the docker driver based on user configuration
	I0921 22:08:36.599139    8700 start.go:284] selected driver: docker
	I0921 22:08:36.599139    8700 start.go:808] validating driver "docker" against <nil>
	I0921 22:08:36.599139    8700 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:08:36.678841    8700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:08:37.241851    8700 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:76 SystemTime:2022-09-21 22:08:36.8575144 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:08:37.241851    8700 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:08:37.242848    8700 start_flags.go:849] Wait components to verify : map[apiserver:true system_pods:true]
	I0921 22:08:37.245860    8700 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:08:37.247857    8700 cni.go:95] Creating CNI manager for ""
	I0921 22:08:37.247857    8700 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:08:37.247857    8700 start_flags.go:316] config:
	{Name:kubernetes-upgrade-20220921220835-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220921220835-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:08:37.252853    8700 out.go:177] * Starting control plane node kubernetes-upgrade-20220921220835-5916 in cluster kubernetes-upgrade-20220921220835-5916
	I0921 22:08:37.254847    8700 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:08:37.256848    8700 out.go:177] * Pulling base image ...
	I0921 22:08:37.259848    8700 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0921 22:08:37.259848    8700 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:08:37.259848    8700 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0921 22:08:37.259848    8700 cache.go:57] Caching tarball of preloaded images
	I0921 22:08:37.259848    8700 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:08:37.259848    8700 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0921 22:08:37.260856    8700 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220921220835-5916\config.json ...
	I0921 22:08:37.260856    8700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220921220835-5916\config.json: {Name:mkc9255bda171da99aca607faa05c1825548e31c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:08:37.478728    8700 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:08:37.478728    8700 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:08:37.478728    8700 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:08:37.478728    8700 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:08:37.478728    8700 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:08:37.478728    8700 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:08:37.478728    8700 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:08:37.478728    8700 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:08:37.478728    8700 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:08:39.964791    8700 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:08:39.964791    8700 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:08:39.964791    8700 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:08:39.965459    8700 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:08:40.196978    8700 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms   [repeated progress frames elided]
	I0921 22:08:41.714245    8700 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:08:41.714245    8700 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:08:41.714245    8700 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:08:41.714245    8700 start.go:364] acquiring machines lock for kubernetes-upgrade-20220921220835-5916: {Name:mkda73b5d7ff530021ad5458d5db8ab2b49076c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:41.714245    8700 start.go:368] acquired machines lock for "kubernetes-upgrade-20220921220835-5916" in 0s
	I0921 22:08:41.714841    8700 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220921220835-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220921220835-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:08:41.714841    8700 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:08:41.720024    8700 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:08:41.720024    8700 start.go:159] libmachine.API.Create for "kubernetes-upgrade-20220921220835-5916" (driver="docker")
	I0921 22:08:41.720024    8700 client.go:168] LocalClient.Create starting
	I0921 22:08:41.721297    8700 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:08:41.721382    8700 main.go:134] libmachine: Decoding PEM data...
	I0921 22:08:41.721382    8700 main.go:134] libmachine: Parsing certificate...
	I0921 22:08:41.721382    8700 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:08:41.721382    8700 main.go:134] libmachine: Decoding PEM data...
	I0921 22:08:41.721903    8700 main.go:134] libmachine: Parsing certificate...
	I0921 22:08:41.731093    8700 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220921220835-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:08:41.941783    8700 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220921220835-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:08:41.944752    8700 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220921220835-5916] to gather additional debugging logs...
	I0921 22:08:41.944752    8700 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220921220835-5916
	W0921 22:08:42.159817    8700 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:08:42.159817    8700 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220921220835-5916]: docker network inspect kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:42.159817    8700 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220921220835-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220921220835-5916
	
	** /stderr **
	I0921 22:08:42.166824    8700 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:08:42.396454    8700 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006ae748] misses:0}
	I0921 22:08:42.396454    8700 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:08:42.396454    8700 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220921220835-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:08:42.403442    8700 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 kubernetes-upgrade-20220921220835-5916
	W0921 22:08:42.611539    8700 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	E0921 22:08:42.611539    8700 network_create.go:104] error while trying to create docker network kubernetes-upgrade-20220921220835-5916 192.168.49.0/24: create docker network kubernetes-upgrade-20220921220835-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cab7ae06daf25a8a553c8ba4ef8638cc7b7076bc56a7c00a3c7caf65b12b71e2 (br-cab7ae06daf2): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:08:42.611539    8700 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220921220835-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cab7ae06daf25a8a553c8ba4ef8638cc7b7076bc56a7c00a3c7caf65b12b71e2 (br-cab7ae06daf2): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220921220835-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cab7ae06daf25a8a553c8ba4ef8638cc7b7076bc56a7c00a3c7caf65b12b71e2 (br-cab7ae06daf2): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
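Note: the "networks have overlapping IPv4" error above means a leftover bridge network already occupies the 192.168.49.0/24 subnet that minikube tried to claim. A minimal way to see which network holds it, assuming nothing beyond the standard docker CLI on the same host (these commands are illustrative and were not part of the captured run):

    docker network ls --filter driver=bridge --format '{{.Name}}'
    docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' $(docker network ls -q)

Any stale network reported with 192.168.49.0/24 (or 192.168.58.0/24 for the retry further below) could then be removed with docker network rm <name>, provided it no longer belongs to a live cluster.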
	I0921 22:08:42.624539    8700 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:08:42.809387    8700 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220921220835-5916 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:08:43.019538    8700 cli_runner.go:211] docker volume create kubernetes-upgrade-20220921220835-5916 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:08:43.019538    8700 client.go:171] LocalClient.Create took 1.2995035s
	I0921 22:08:45.043744    8700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:08:45.049672    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:08:45.235661    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:08:45.236059    8700 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:45.532215    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:08:45.724500    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:08:45.724500    8700 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:46.276507    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:08:46.502429    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	W0921 22:08:46.502429    8700 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	
	W0921 22:08:46.502429    8700 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:46.513459    8700 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:08:46.520479    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:08:46.765247    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:08:46.765247    8700 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:47.035731    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:08:47.233651    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:08:47.233897    8700 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:47.595390    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:08:47.803121    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:08:47.803272    8700 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:48.485152    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:08:48.679882    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	W0921 22:08:48.679882    8700 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	
	W0921 22:08:48.679882    8700 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:48.679882    8700 start.go:128] duration metric: createHost completed in 6.9649877s
	I0921 22:08:48.679882    8700 start.go:83] releasing machines lock for "kubernetes-upgrade-20220921220835-5916", held for 6.9655834s
	W0921 22:08:48.680424    8700 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220921220835-5916 container: docker volume create kubernetes-upgrade-20220921220835-5916 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220921220835-5916: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220921220835-5916': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220921220835-5916: read-only file system
	I0921 22:08:48.713704    8700 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}
	W0921 22:08:48.896554    8700 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:48.896554    8700 delete.go:82] Unable to get host status for kubernetes-upgrade-20220921220835-5916, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	W0921 22:08:48.897124    8700 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220921220835-5916 container: docker volume create kubernetes-upgrade-20220921220835-5916 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220921220835-5916: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220921220835-5916': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220921220835-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220921220835-5916 container: docker volume create kubernetes-upgrade-20220921220835-5916 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220921220835-5916: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220921220835-5916': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220921220835-5916: read-only file system
	
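Note: the underlying failure here is not the missing container but the daemon's storage: the volume cannot be created because /var/lib/docker/volumes sits on a read-only filesystem inside the Docker host VM (presumably Docker Desktop or its WSL backend on this Windows runner). A minimal reproduction independent of minikube, assuming only the docker CLI (the probe volume name is made up for illustration):

    docker info --format '{{.DockerRootDir}}'
    docker volume create minikube-probe && docker volume rm minikube-probe

If the probe fails with the same "read-only file system" message, the daemon's data disk is in a bad state and likely needs a Docker restart or reset before any minikube profile can be created.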
	I0921 22:08:48.897254    8700 start.go:617] Will try again in 5 seconds ...
	I0921 22:08:53.908780    8700 start.go:364] acquiring machines lock for kubernetes-upgrade-20220921220835-5916: {Name:mkda73b5d7ff530021ad5458d5db8ab2b49076c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:53.909421    8700 start.go:368] acquired machines lock for "kubernetes-upgrade-20220921220835-5916" in 395.6µs
	I0921 22:08:53.909626    8700 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:08:53.909694    8700 fix.go:55] fixHost starting: 
	I0921 22:08:53.925216    8700 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}
	W0921 22:08:54.109657    8700 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:54.109657    8700 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220921220835-5916: state= err=unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:54.109657    8700 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:08:54.113620    8700 out.go:177] * docker "kubernetes-upgrade-20220921220835-5916" container is missing, will recreate.
	I0921 22:08:54.115745    8700 delete.go:124] DEMOLISHING kubernetes-upgrade-20220921220835-5916 ...
	I0921 22:08:54.132050    8700 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}
	W0921 22:08:54.314013    8700 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:08:54.314013    8700 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:54.314013    8700 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:54.330061    8700 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}
	W0921 22:08:54.516317    8700 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:54.516317    8700 delete.go:82] Unable to get host status for kubernetes-upgrade-20220921220835-5916, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:54.524980    8700 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220921220835-5916
	W0921 22:08:54.734133    8700 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:08:54.734133    8700 kic.go:356] could not find the container kubernetes-upgrade-20220921220835-5916 to remove it. will try anyways
	I0921 22:08:54.743199    8700 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}
	W0921 22:08:54.952113    8700 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:08:54.952186    8700 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:54.959633    8700 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-20220921220835-5916 /bin/bash -c "sudo init 0"
	W0921 22:08:55.157454    8700 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-20220921220835-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:08:55.157454    8700 oci.go:646] error shutdown kubernetes-upgrade-20220921220835-5916: docker exec --privileged -t kubernetes-upgrade-20220921220835-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:56.173771    8700 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}
	W0921 22:08:56.354784    8700 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:56.354784    8700 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:56.354784    8700 oci.go:660] temporary error: container kubernetes-upgrade-20220921220835-5916 status is  but expect it to be exited
	I0921 22:08:56.354784    8700 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:56.693314    8700 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}
	W0921 22:08:56.885203    8700 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:56.885404    8700 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:56.885533    8700 oci.go:660] temporary error: container kubernetes-upgrade-20220921220835-5916 status is  but expect it to be exited
	I0921 22:08:56.885592    8700 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:57.350163    8700 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}
	W0921 22:08:57.587406    8700 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:57.587457    8700 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:57.587457    8700 oci.go:660] temporary error: container kubernetes-upgrade-20220921220835-5916 status is  but expect it to be exited
	I0921 22:08:57.587457    8700 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:58.508780    8700 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}
	W0921 22:08:58.704142    8700 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:08:58.704250    8700 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:08:58.704294    8700 oci.go:660] temporary error: container kubernetes-upgrade-20220921220835-5916 status is  but expect it to be exited
	I0921 22:08:58.704294    8700 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:00.436509    8700 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}
	W0921 22:09:00.661275    8700 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:00.661275    8700 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:00.661275    8700 oci.go:660] temporary error: container kubernetes-upgrade-20220921220835-5916 status is  but expect it to be exited
	I0921 22:09:00.661275    8700 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:04.008068    8700 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}
	W0921 22:09:04.194348    8700 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:04.194348    8700 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:04.194348    8700 oci.go:660] temporary error: container kubernetes-upgrade-20220921220835-5916 status is  but expect it to be exited
	I0921 22:09:04.194348    8700 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:06.915612    8700 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}
	W0921 22:09:07.109682    8700 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:07.109729    8700 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:07.109729    8700 oci.go:660] temporary error: container kubernetes-upgrade-20220921220835-5916 status is  but expect it to be exited
	I0921 22:09:07.109729    8700 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:12.149246    8700 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}
	W0921 22:09:12.327529    8700 cli_runner.go:211] docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:12.327638    8700 oci.go:658] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:12.341451    8700 oci.go:660] temporary error: container kubernetes-upgrade-20220921220835-5916 status is  but expect it to be exited
	I0921 22:09:12.341451    8700 oci.go:88] couldn't shut down kubernetes-upgrade-20220921220835-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	 
	I0921 22:09:12.348969    8700 cli_runner.go:164] Run: docker rm -f -v kubernetes-upgrade-20220921220835-5916
	I0921 22:09:12.537627    8700 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220921220835-5916
	W0921 22:09:12.717313    8700 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:12.726137    8700 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220921220835-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:09:12.923917    8700 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220921220835-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:09:12.931888    8700 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220921220835-5916] to gather additional debugging logs...
	I0921 22:09:12.931888    8700 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220921220835-5916
	W0921 22:09:13.140308    8700 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:13.140308    8700 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220921220835-5916]: docker network inspect kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:13.140308    8700 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220921220835-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220921220835-5916
	
	** /stderr **
	W0921 22:09:13.141367    8700 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:09:13.141367    8700 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:09:14.147150    8700 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:09:14.153173    8700 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:09:14.153559    8700 start.go:159] libmachine.API.Create for "kubernetes-upgrade-20220921220835-5916" (driver="docker")
	I0921 22:09:14.153559    8700 client.go:168] LocalClient.Create starting
	I0921 22:09:14.153559    8700 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:09:14.154401    8700 main.go:134] libmachine: Decoding PEM data...
	I0921 22:09:14.154401    8700 main.go:134] libmachine: Parsing certificate...
	I0921 22:09:14.154534    8700 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:09:14.154534    8700 main.go:134] libmachine: Decoding PEM data...
	I0921 22:09:14.154534    8700 main.go:134] libmachine: Parsing certificate...
	I0921 22:09:14.164094    8700 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220921220835-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:09:14.362706    8700 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220921220835-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:09:14.367707    8700 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220921220835-5916] to gather additional debugging logs...
	I0921 22:09:14.367707    8700 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220921220835-5916
	W0921 22:09:14.568213    8700 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:14.568275    8700 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220921220835-5916]: docker network inspect kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:14.568275    8700 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220921220835-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220921220835-5916
	
	** /stderr **
	I0921 22:09:14.575123    8700 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:09:14.769275    8700 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ae748] amended:false}} dirty:map[] misses:0}
	I0921 22:09:14.769275    8700 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:09:14.784274    8700 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006ae748] amended:true}} dirty:map[192.168.49.0:0xc0006ae748 192.168.58.0:0xc0006105f8] misses:0}
	I0921 22:09:14.784274    8700 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:09:14.784274    8700 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220921220835-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:09:14.791273    8700 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 kubernetes-upgrade-20220921220835-5916
	W0921 22:09:14.985791    8700 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	E0921 22:09:14.985791    8700 network_create.go:104] error while trying to create docker network kubernetes-upgrade-20220921220835-5916 192.168.58.0/24: create docker network kubernetes-upgrade-20220921220835-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4a4b0e40804095346b517492898349a092a981400cc3491620aef05682937d6c (br-4a4b0e408040): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:09:14.985791    8700 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220921220835-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4a4b0e40804095346b517492898349a092a981400cc3491620aef05682937d6c (br-4a4b0e408040): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20220921220835-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4a4b0e40804095346b517492898349a092a981400cc3491620aef05682937d6c (br-4a4b0e408040): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:09:15.000070    8700 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:09:15.194697    8700 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220921220835-5916 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:09:15.388012    8700 cli_runner.go:211] docker volume create kubernetes-upgrade-20220921220835-5916 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:09:15.388012    8700 client.go:171] LocalClient.Create took 1.2344435s
	I0921 22:09:17.410055    8700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:09:17.416169    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:17.614749    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:17.614948    8700 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:17.871896    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:18.063564    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:18.063564    8700 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:18.365818    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:18.559987    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:18.559987    8700 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:19.020899    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:19.246761    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	W0921 22:09:19.247183    8700 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	
	W0921 22:09:19.247183    8700 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:19.260335    8700 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:09:19.270497    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:19.510476    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:19.510476    8700 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:19.711492    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:19.932755    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:19.932808    8700 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:20.210703    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:20.427957    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:20.428277    8700 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:20.924823    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:21.123801    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	W0921 22:09:21.124051    8700 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	
	W0921 22:09:21.124051    8700 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:21.124051    8700 start.go:128] duration metric: createHost completed in 6.9768474s
	I0921 22:09:21.134190    8700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:09:21.140936    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:21.325089    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:21.325360    8700 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:21.679609    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:21.869006    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:21.869006    8700 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:22.189607    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:22.384127    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:22.384127    8700 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:22.844390    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:23.044179    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	W0921 22:09:23.044218    8700 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	
	W0921 22:09:23.044218    8700 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:23.054770    8700 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:09:23.061903    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:23.245477    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:23.245816    8700 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:23.440334    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:23.646310    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:23.646673    8700 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:24.178818    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:24.357687    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	I0921 22:09:24.357687    8700 retry.go:31] will retry after 673.154531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:25.042026    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916
	W0921 22:09:25.250087    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916 returned with exit code 1
	W0921 22:09:25.250251    8700 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	
	W0921 22:09:25.250251    8700 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	I0921 22:09:25.250251    8700 fix.go:57] fixHost completed within 31.3403184s
	I0921 22:09:25.250251    8700 start.go:83] releasing machines lock for "kubernetes-upgrade-20220921220835-5916", held for 31.3405254s
	W0921 22:09:25.251023    8700 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220921220835-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220921220835-5916 container: docker volume create kubernetes-upgrade-20220921220835-5916 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220921220835-5916: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220921220835-5916': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220921220835-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220921220835-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220921220835-5916 container: docker volume create kubernetes-upgrade-20220921220835-5916 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220921220835-5916: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220921220835-5916': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220921220835-5916: read-only file system
	
	I0921 22:09:25.258972    8700 out.go:177] 
	W0921 22:09:25.261371    8700 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220921220835-5916 container: docker volume create kubernetes-upgrade-20220921220835-5916 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220921220835-5916: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220921220835-5916': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220921220835-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20220921220835-5916 container: docker volume create kubernetes-upgrade-20220921220835-5916 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220921220835-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubernetes-upgrade-20220921220835-5916: error while creating volume root path '/var/lib/docker/volumes/kubernetes-upgrade-20220921220835-5916': mkdir /var/lib/docker/volumes/kubernetes-upgrade-20220921220835-5916: read-only file system
	
	W0921 22:09:25.261990    8700 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:09:25.262043    8700 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:09:25.265111    8700 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220921220835-5916 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: exit status 60
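A minimal, hand-runnable sketch of the lookup that keeps failing above: minikube asks Docker which host port is published for the container's 22/tcp (SSH) port using the Go template shown in the log, and every attempt fails because the container was never created. The snippet below only illustrates that docker invocation, it is not minikube's code; the container name is copied from the log, and the extra single quotes minikube wraps around the template on Windows are dropped.

	// port22.go - reproduce the "get port 22" lookup from the log by shelling
	// out to `docker container inspect` with the same Go template.
	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
	)

	// sshHostPort returns the host port Docker publishes for the container's
	// 22/tcp port. If the container does not exist, docker exits non-zero and
	// the error mirrors the "No such container" failures above.
	func sshHostPort(container string) (int, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).CombinedOutput()
		if err != nil {
			return 0, fmt.Errorf("inspect %s: %v: %s", container, err, strings.TrimSpace(string(out)))
		}
		return strconv.Atoi(strings.TrimSpace(string(out)))
	}

	func main() {
		port, err := sshHostPort("kubernetes-upgrade-20220921220835-5916") // name taken from the log
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("ssh is published on host port", port)
	}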
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220921220835-5916

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220921220835-5916: exit status 82 (19.4236077s)

                                                
                                                
-- stdout --
	* Stopping node "kubernetes-upgrade-20220921220835-5916"  ...
	* Stopping node "kubernetes-upgrade-20220921220835-5916"  ...
	* Stopping node "kubernetes-upgrade-20220921220835-5916"  ...
	* Stopping node "kubernetes-upgrade-20220921220835-5916"  ...
	* Stopping node "kubernetes-upgrade-20220921220835-5916"  ...
	* Stopping node "kubernetes-upgrade-20220921220835-5916"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:09:29.322853    7256 daemonize_windows.go:38] error terminating scheduled stop for profile kubernetes-upgrade-20220921220835-5916: stopping schedule-stop service for profile kubernetes-upgrade-20220921220835-5916: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20220921220835-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921220835-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect kubernetes-upgrade-20220921220835-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
version_upgrade_test.go:236: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220921220835-5916 failed: exit status 82
panic.go:522: *** TestKubernetesUpgrade FAILED at 2022-09-21 22:09:44.8296842 +0000 GMT m=+2392.379404501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220921220835-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect kubernetes-upgrade-20220921220835-5916: exit status 1 (256.4028ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: kubernetes-upgrade-20220921220835-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20220921220835-5916 -n kubernetes-upgrade-20220921220835-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20220921220835-5916 -n kubernetes-upgrade-20220921220835-5916: exit status 7 (607.3087ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:09:45.671777    3944 status.go:247] status error: host: state: unknown state "kubernetes-upgrade-20220921220835-5916": docker container inspect kubernetes-upgrade-20220921220835-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20220921220835-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-20220921220835-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220921220835-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220921220835-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220921220835-5916: (1.7276125s)
--- FAIL: TestKubernetesUpgrade (72.08s)
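The interleaved retry.go lines above ("will retry after 340.62286ms", "297.417842ms", ...) show the same inspect call being retried with short, irregular delays before start.go gives up. Below is a minimal Go sketch of that retry-with-jitter pattern; the jittered delay is an assumption made to match the irregular intervals in the log, and this is not minikube's actual retry helper.

	// retrysketch.go - retry an operation a few times, sleeping a jittered
	// delay between attempts, similar to the "will retry after ..." lines above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn up to attempts times and returns the last error.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			if i == attempts-1 {
				break
			}
			// Random jitter keeps repeated failures from hammering the daemon
			// in lockstep, which would explain the irregular logged intervals.
			delay := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := retry(5, 200*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("No such container: kubernetes-upgrade-20220921220835-5916")
			}
			return nil
		})
		fmt.Println("result:", err, "after", calls, "calls")
	}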

                                                
                                    
TestMissingContainerUpgrade (128.26s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.221805706.exe start -p missing-upgrade-20220921220627-5916 --memory=2200 --driver=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.221805706.exe start -p missing-upgrade-20220921220627-5916 --memory=2200 --driver=docker: exit status 78 (38.4506316s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20220921220627-5916] minikube v1.9.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220921220627-5916
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* docker "missing-upgrade-20220921220627-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220921220627-5916 container: output Error response from daemon: create missing-upgrade-20220921220627-5916: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220921220627-5916': mkdir /var/lib/docker/volumes/missing-upgrade-20220921220627-5916: read-only file system
	: exit status 1
	* 
	* [DOCKER_READONLY] Failed to start docker container. "minikube start -p missing-upgrade-20220921220627-5916" may fix it. recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220921220627-5916 container: output Error response from daemon: create missing-upgrade-20220921220627-5916: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220921220627-5916': mkdir /var/lib/docker/volumes/missing-upgrade-20220921220627-5916: read-only file system
	: exit status 1
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.221805706.exe start -p missing-upgrade-20220921220627-5916 --memory=2200 --driver=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.221805706.exe start -p missing-upgrade-20220921220627-5916 --memory=2200 --driver=docker: exit status 78 (46.9699989s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20220921220627-5916] minikube v1.9.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220921220627-5916
	* Pulling base image ...
	* docker "missing-upgrade-20220921220627-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* docker "missing-upgrade-20220921220627-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220921220627-5916 container: output Error response from daemon: create missing-upgrade-20220921220627-5916: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220921220627-5916': mkdir /var/lib/docker/volumes/missing-upgrade-20220921220627-5916: read-only file system
	: exit status 1
	* 
	* [DOCKER_READONLY] Failed to start docker container. "minikube start -p missing-upgrade-20220921220627-5916" may fix it. recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220921220627-5916 container: output Error response from daemon: create missing-upgrade-20220921220627-5916: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220921220627-5916': mkdir /var/lib/docker/volumes/missing-upgrade-20220921220627-5916: read-only file system
	: exit status 1
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.221805706.exe start -p missing-upgrade-20220921220627-5916 --memory=2200 --driver=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.221805706.exe start -p missing-upgrade-20220921220627-5916 --memory=2200 --driver=docker: exit status 78 (36.5344892s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20220921220627-5916] minikube v1.9.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220921220627-5916
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "missing-upgrade-20220921220627-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* docker "missing-upgrade-20220921220627-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220921220627-5916 container: output Error response from daemon: create missing-upgrade-20220921220627-5916: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220921220627-5916': mkdir /var/lib/docker/volumes/missing-upgrade-20220921220627-5916: read-only file system
	: exit status 1
	* 
	* [DOCKER_READONLY] Failed to start docker container. "minikube start -p missing-upgrade-20220921220627-5916" may fix it. recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20220921220627-5916 container: output Error response from daemon: create missing-upgrade-20220921220627-5916: error while creating volume root path '/var/lib/docker/volumes/missing-upgrade-20220921220627-5916': mkdir /var/lib/docker/volumes/missing-upgrade-20220921220627-5916: read-only file system
	: exit status 1
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
version_upgrade_test.go:322: release start failed: exit status 78
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2022-09-21 22:08:32.830796 +0000 GMT m=+2320.381063501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220921220627-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect missing-upgrade-20220921220627-5916: exit status 1 (252.3989ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: missing-upgrade-20220921220627-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-20220921220627-5916 -n missing-upgrade-20220921220627-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-20220921220627-5916 -n missing-upgrade-20220921220627-5916: exit status 7 (581.2631ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:08:33.640443    4312 status.go:247] status error: host: state: unknown state "missing-upgrade-20220921220627-5916": docker container inspect missing-upgrade-20220921220627-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20220921220627-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-20220921220627-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "missing-upgrade-20220921220627-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20220921220627-5916

                                                
                                                
=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20220921220627-5916: (1.6746199s)
--- FAIL: TestMissingContainerUpgrade (128.26s)
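Every start in this report ultimately dies on the same step: `docker volume create` cannot create the volume's root directory because `/var/lib/docker/volumes` sits on a read-only file system. The sketch below reproduces just that step outside of minikube (a plain `docker` CLI on PATH is assumed; the probe volume name is made up) so the daemon can be checked directly, for example after the suggested Docker restart.

	// readonlyprobe.go - try to create and immediately remove a throwaway
	// volume; on the broken runner the create fails with the same
	// "read-only file system" error shown in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// run executes `docker <args...>` and wraps any failure with its output.
	func run(args ...string) error {
		out, err := exec.Command("docker", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker %s: %v: %s", strings.Join(args, " "), err, strings.TrimSpace(string(out)))
		}
		return nil
	}

	func main() {
		const name = "readonly-probe" // hypothetical throwaway volume name
		if err := run("volume", "create", name); err != nil {
			fmt.Println("volume creation failed (data root read-only or daemon unhealthy):", err)
			return
		}
		// Clean up so the probe leaves nothing behind.
		if err := run("volume", "rm", name); err != nil {
			fmt.Println("volume created but cleanup failed:", err)
			return
		}
		fmt.Println("docker volume creation works; the read-only error did not reproduce")
	}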

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (107.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.3839537704.exe start -p stopped-upgrade-20220921220434-5916 --memory=2200 --vm-driver=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.3839537704.exe start -p stopped-upgrade-20220921220434-5916 --memory=2200 --vm-driver=docker: exit status 70 (32.2541397s)

                                                
                                                
-- stdout --
	! [stopped-upgrade-20220921220434-5916] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig2822364711
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220921220434-5916 container: output Error response from daemon: create stopped-upgrade-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220921220434-5916': mkdir /var/lib/docker/volumes/stopped-upgrade-20220921220434-5916: read-only file system
	: exit status 1
	* docker "stopped-upgrade-20220921220434-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220921220434-5916 container: output Error response from daemon: create stopped-upgrade-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220921220434-5916': mkdir /var/lib/docker/volumes/stopped-upgrade-20220921220434-5916: read-only file system
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220921220434-5916", then "minikube start -p stopped-upgrade-20220921220434-5916 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.27.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.27.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220921220434-5916 container: output Error response from daemon: create stopped-upgrade-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220921220434-5916': mkdir /var/lib/docker/volumes/stopped-upgrade-20220921220434-5916: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.3839537704.exe start -p stopped-upgrade-20220921220434-5916 --memory=2200 --vm-driver=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.3839537704.exe start -p stopped-upgrade-20220921220434-5916 --memory=2200 --vm-driver=docker: exit status 70 (32.678197s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220921220434-5916] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig3186661711
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "stopped-upgrade-20220921220434-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220921220434-5916 container: output Error response from daemon: create stopped-upgrade-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220921220434-5916': mkdir /var/lib/docker/volumes/stopped-upgrade-20220921220434-5916: read-only file system
	: exit status 1
	* docker "stopped-upgrade-20220921220434-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220921220434-5916 container: output Error response from daemon: create stopped-upgrade-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220921220434-5916': mkdir /var/lib/docker/volumes/stopped-upgrade-20220921220434-5916: read-only file system
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220921220434-5916", then "minikube start -p stopped-upgrade-20220921220434-5916 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220921220434-5916 container: output Error response from daemon: create stopped-upgrade-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220921220434-5916': mkdir /var/lib/docker/volumes/stopped-upgrade-20220921220434-5916: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.3839537704.exe start -p stopped-upgrade-20220921220434-5916 --memory=2200 --vm-driver=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.3839537704.exe start -p stopped-upgrade-20220921220434-5916 --memory=2200 --vm-driver=docker: exit status 70 (40.4689967s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220921220434-5916] minikube v1.9.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=C:\Users\jenkins.minikube2\AppData\Local\Temp\legacy_kubeconfig3951807806
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "stopped-upgrade-20220921220434-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220921220434-5916 container: output Error response from daemon: create stopped-upgrade-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220921220434-5916': mkdir /var/lib/docker/volumes/stopped-upgrade-20220921220434-5916: read-only file system
	: exit status 1
	* docker "stopped-upgrade-20220921220434-5916" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=2200MB (51405MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220921220434-5916 container: output Error response from daemon: create stopped-upgrade-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220921220434-5916': mkdir /var/lib/docker/volumes/stopped-upgrade-20220921220434-5916: read-only file system
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20220921220434-5916", then "minikube start -p stopped-upgrade-20220921220434-5916 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20220921220434-5916 container: output Error response from daemon: create stopped-upgrade-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/stopped-upgrade-20220921220434-5916': mkdir /var/lib/docker/volumes/stopped-upgrade-20220921220434-5916: read-only file system
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (107.98s)
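The legacy v1.9.0 binary shows the same recover-once flow seen elsewhere in this report: one `! StartHost failed, but will try again`, a recreate of the supposedly missing container, then `StartHost failed again` and an exit with delete-and-retry advice. The rough Go sketch below mirrors only that observed two-attempt behaviour, using a stub that always fails the way the log does; it is not minikube's StartHost implementation.

	// starthostflow.go - illustrate the try / recreate / try-once-more flow
	// visible in the output above.
	package main

	import (
		"errors"
		"fmt"
	)

	// startHost stands in for the real host-creation step; it always fails
	// here so both attempts are exercised, matching the log.
	func startHost(profile string) error {
		return errors.New("creating volume for " + profile + " container: read-only file system")
	}

	func main() {
		profile := "stopped-upgrade-20220921220434-5916" // name taken from the log
		if err := startHost(profile); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			// The container is treated as missing and creation is retried once.
			if err := startHost(profile); err != nil {
				fmt.Println("* StartHost failed again:", err)
				fmt.Printf("  - Run: \"minikube delete -p %s\", then start again with --alsologtostderr -v=1\n", profile)
				return
			}
		}
		fmt.Println("host started")
	}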

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (53.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220921220434-5916 --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220921220434-5916 --driver=docker: exit status 60 (52.7677902s)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220921220434-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node NoKubernetes-20220921220434-5916 in cluster NoKubernetes-20220921220434-5916
	* Pulling base image ...
	* Another minikube instance is downloading dependencies... 
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "NoKubernetes-20220921220434-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 22:04:45.688848    5608 network_create.go:104] error while trying to create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24: create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6072cdf19fd3724c9b0e09238bb6477f1d6dc62fe45930d60cb89253f7e04160 (br-6072cdf19fd3): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 6072cdf19fd3724c9b0e09238bb6477f1d6dc62fe45930d60cb89253f7e04160 (br-6072cdf19fd3): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220921220434-5916 container: docker volume create NoKubernetes-20220921220434-5916 --label name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220921220434-5916': mkdir /var/lib/docker/volumes/NoKubernetes-20220921220434-5916: read-only file system
	
	E0921 22:05:18.235193    5608 network_create.go:104] error while trying to create docker network NoKubernetes-20220921220434-5916 192.168.58.0/24: create docker network NoKubernetes-20220921220434-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f21fab21ec69546eac312011ce5ce35a518b8b43204b7e4e0ca278d0fd640f71 (br-f21fab21ec69): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220921220434-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f21fab21ec69546eac312011ce5ce35a518b8b43204b7e4e0ca278d0fd640f71 (br-f21fab21ec69): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-20220921220434-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220921220434-5916 container: docker volume create NoKubernetes-20220921220434-5916 --label name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220921220434-5916': mkdir /var/lib/docker/volumes/NoKubernetes-20220921220434-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220921220434-5916 container: docker volume create NoKubernetes-20220921220434-5916 --label name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220921220434-5916': mkdir /var/lib/docker/volumes/NoKubernetes-20220921220434-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220921220434-5916 --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220921220434-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220921220434-5916: exit status 1 (237.8958ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220921220434-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220921220434-5916 -n NoKubernetes-20220921220434-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220921220434-5916 -n NoKubernetes-20220921220434-5916: exit status 7 (606.7812ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:05:28.662971    6968 status.go:247] status error: host: state: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220921220434-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (53.63s)
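Besides the read-only volume failure, this run also cannot create its dedicated networks: both 192.168.49.0/24 and 192.168.58.0/24 conflict with existing bridges ("networks have overlapping IPv4"). The small Go sketch below (plain `docker` CLI on PATH assumed) lists every Docker network together with the subnets it already claims, which makes such overlaps easy to spot before retrying.

	// networksubnets.go - print each Docker network and its IPAM subnets so
	// overlapping IPv4 ranges like the ones in the error above stand out.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// dockerOut runs `docker <args...>` and returns trimmed stdout.
	func dockerOut(args ...string) (string, error) {
		out, err := exec.Command("docker", args...).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		ids, err := dockerOut("network", "ls", "-q")
		if err != nil {
			fmt.Println("docker network ls failed:", err)
			return
		}
		for _, id := range strings.Fields(ids) {
			name, _ := dockerOut("network", "inspect", "-f", "{{.Name}}", id)
			subnets, _ := dockerOut("network", "inspect", "-f",
				"{{range .IPAM.Config}}{{.Subnet}} {{end}}", id)
			fmt.Printf("%-24s %s\n", name, subnets)
		}
	}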

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (78.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220921220434-5916 --no-kubernetes --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220921220434-5916 --no-kubernetes --driver=docker: exit status 60 (1m18.048417s)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220921220434-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-20220921220434-5916
	* Pulling base image ...
	* docker "NoKubernetes-20220921220434-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "NoKubernetes-20220921220434-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 22:05:57.783487    5388 network_create.go:104] error while trying to create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24: create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7509e19f81ad64dca4e198fc8c96b9ae339844a71407e9ff065883a6332abc01 (br-7509e19f81ad): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7509e19f81ad64dca4e198fc8c96b9ae339844a71407e9ff065883a6332abc01 (br-7509e19f81ad): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220921220434-5916 container: docker volume create NoKubernetes-20220921220434-5916 --label name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220921220434-5916': mkdir /var/lib/docker/volumes/NoKubernetes-20220921220434-5916: read-only file system
	
	E0921 22:06:37.263736    5388 network_create.go:104] error while trying to create docker network NoKubernetes-20220921220434-5916 192.168.58.0/24: create docker network NoKubernetes-20220921220434-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 66cb178b7cc40487eb9a45fe87b4fbf1fb145541fd6cbd0f8cb42ff9538dd8a7 (br-66cb178b7cc4): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220921220434-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 66cb178b7cc40487eb9a45fe87b4fbf1fb145541fd6cbd0f8cb42ff9538dd8a7 (br-66cb178b7cc4): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-20220921220434-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220921220434-5916 container: docker volume create NoKubernetes-20220921220434-5916 --label name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220921220434-5916': mkdir /var/lib/docker/volumes/NoKubernetes-20220921220434-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220921220434-5916 container: docker volume create NoKubernetes-20220921220434-5916 --label name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220921220434-5916': mkdir /var/lib/docker/volumes/NoKubernetes-20220921220434-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220921220434-5916 --no-kubernetes --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithStopK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220921220434-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220921220434-5916: exit status 1 (276.073ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220921220434-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220921220434-5916 -n NoKubernetes-20220921220434-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220921220434-5916 -n NoKubernetes-20220921220434-5916: exit status 7 (585.965ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:06:47.577512    8216 status.go:247] status error: host: state: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220921220434-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (78.92s)
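
Note on the failure above: the PR_DOCKER_READONLY_VOL exit combines two host-side Docker problems visible in the stderr. The requested subnets 192.168.49.0/24 and 192.168.58.0/24 collide with the existing br-a04d36bfb3cf and br-8a3cd8d165a4 bridges (likely left over from earlier profiles in this run), and /var/lib/docker/volumes inside the Docker Desktop VM is reported as a read-only file system, so no volume can be created. A minimal diagnostic sketch, assuming a PowerShell session against the same Docker Desktop daemon; the probe volume name is illustrative, and the network ID is the one quoted in the error above:

    # Show existing bridge networks and the subnet behind one of the conflicting bridges.
    docker network ls --filter driver=bridge
    docker network inspect a04d36bfb3cf --format "{{range .IPAM.Config}}{{.Subnet}}{{end}}"

    # Remove bridges that no container is using any more.
    docker network prune -f

    # Try the volume path directly; the same "read-only file system" message would
    # reproduce here if the daemon's volume root is wedged.
    docker volume create readonly-probe
    docker volume rm readonly-probe

    # Drop the half-created profile, as the failure output itself suggests.
    out/minikube-windows-amd64.exe delete -p NoKubernetes-20220921220434-5916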

                                                
                                    
TestPause/serial/Start (51.75s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220921220531-5916 --memory=2048 --install-addons=false --wait=all --driver=docker

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p pause-20220921220531-5916 --memory=2048 --install-addons=false --wait=all --driver=docker: exit status 60 (50.8869151s)

                                                
                                                
-- stdout --
	* [pause-20220921220531-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node pause-20220921220531-5916 in cluster pause-20220921220531-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "pause-20220921220531-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase:  0 B [_______________________] ?% ? p/s 1.0s
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 22:05:39.995575    5492 network_create.go:104] error while trying to create docker network pause-20220921220531-5916 192.168.49.0/24: create docker network pause-20220921220531-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=pause-20220921220531-5916 pause-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d369c77bf76ce8a226ce0ce224b94404605d0b4f525befc3ed80b66e92e22b02 (br-d369c77bf76c): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network pause-20220921220531-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=pause-20220921220531-5916 pause-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d369c77bf76ce8a226ce0ce224b94404605d0b4f525befc3ed80b66e92e22b02 (br-d369c77bf76c): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for pause-20220921220531-5916 container: docker volume create pause-20220921220531-5916 --label name.minikube.sigs.k8s.io=pause-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/pause-20220921220531-5916': mkdir /var/lib/docker/volumes/pause-20220921220531-5916: read-only file system
	
	E0921 22:06:12.566574    5492 network_create.go:104] error while trying to create docker network pause-20220921220531-5916 192.168.58.0/24: create docker network pause-20220921220531-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=pause-20220921220531-5916 pause-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 04f89a1e7e692738319b2c79dd3144813c707c521b2dce3c4cd2e27085331103 (br-04f89a1e7e69): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network pause-20220921220531-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=pause-20220921220531-5916 pause-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 04f89a1e7e692738319b2c79dd3144813c707c521b2dce3c4cd2e27085331103 (br-04f89a1e7e69): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p pause-20220921220531-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for pause-20220921220531-5916 container: docker volume create pause-20220921220531-5916 --label name.minikube.sigs.k8s.io=pause-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/pause-20220921220531-5916': mkdir /var/lib/docker/volumes/pause-20220921220531-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for pause-20220921220531-5916 container: docker volume create pause-20220921220531-5916 --label name.minikube.sigs.k8s.io=pause-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/pause-20220921220531-5916': mkdir /var/lib/docker/volumes/pause-20220921220531-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p pause-20220921220531-5916 --memory=2048 --install-addons=false --wait=all --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220921220531-5916

                                                
                                                
=== CONT  TestPause/serial/Start
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20220921220531-5916: exit status 1 (286.6237ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: pause-20220921220531-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220921220531-5916 -n pause-20220921220531-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220921220531-5916 -n pause-20220921220531-5916: exit status 7 (568.4349ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:06:23.592467     648 status.go:247] status error: host: state: unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20220921220531-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/Start (51.75s)
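
TestPause/serial/Start hits the same PR_DOCKER_READONLY_VOL exit as the NoKubernetes run above, so the recovery path printed by minikube applies to both profiles. A hedged sketch of that path, assuming the WSL2-backed Docker Desktop reported by the docker info output later in this log; the wsl --shutdown step plus restarting Docker Desktop from the tray is one possible way to restart the backend, not the only one:

    # Drop the half-created profile; "minikube delete -p pause-20220921220531-5916" is suggested above.
    out/minikube-windows-amd64.exe delete -p pause-20220921220531-5916

    # Restart the Docker backend. On the WSL2 backend shown in docker info this can be
    # done by shutting the WSL VM down and starting Docker Desktop again from the tray.
    wsl --shutdown

    # Once "docker info" responds again, retry the original command from pause_test.go:80.
    out/minikube-windows-amd64.exe start -p pause-20220921220531-5916 --memory=2048 --install-addons=false --wait=all --driver=docker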

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220921220434-5916

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220921220434-5916: exit status 80 (1.2226352s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------|-------------------|---------|---------------------|---------------------|
	| unpause | -p                                                | json-output-20220921214338-5916          | testUser          | v1.27.0 | 21 Sep 22 21:44 GMT |                     |
	|         | json-output-20220921214338-5916                   |                                          |                   |         |                     |                     |
	|         | --output=json --user=testUser                     |                                          |                   |         |                     |                     |
	| stop    | -p                                                | json-output-20220921214338-5916          | testUser          | v1.27.0 | 21 Sep 22 21:44 GMT |                     |
	|         | json-output-20220921214338-5916                   |                                          |                   |         |                     |                     |
	|         | --output=json --user=testUser                     |                                          |                   |         |                     |                     |
	| delete  | -p                                                | json-output-20220921214338-5916          | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:44 GMT | 21 Sep 22 21:44 GMT |
	|         | json-output-20220921214338-5916                   |                                          |                   |         |                     |                     |
	| start   | -p                                                | json-output-error-20220921214450-5916    | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:44 GMT |                     |
	|         | json-output-error-20220921214450-5916             |                                          |                   |         |                     |                     |
	|         | --memory=2200 --output=json                       |                                          |                   |         |                     |                     |
	|         | --wait=true --driver=fail                         |                                          |                   |         |                     |                     |
	| delete  | -p                                                | json-output-error-20220921214450-5916    | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:44 GMT | 21 Sep 22 21:44 GMT |
	|         | json-output-error-20220921214450-5916             |                                          |                   |         |                     |                     |
	| start   | -p                                                | docker-network-20220921214451-5916       | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:44 GMT | 21 Sep 22 21:47 GMT |
	|         | docker-network-20220921214451-5916                |                                          |                   |         |                     |                     |
	|         | --network=                                        |                                          |                   |         |                     |                     |
	| delete  | -p                                                | docker-network-20220921214451-5916       | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:47 GMT | 21 Sep 22 21:48 GMT |
	|         | docker-network-20220921214451-5916                |                                          |                   |         |                     |                     |
	| start   | -p                                                | docker-network-20220921214811-5916       | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:48 GMT | 21 Sep 22 21:50 GMT |
	|         | docker-network-20220921214811-5916                |                                          |                   |         |                     |                     |
	|         | --network=bridge                                  |                                          |                   |         |                     |                     |
	| delete  | -p                                                | docker-network-20220921214811-5916       | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:50 GMT | 21 Sep 22 21:51 GMT |
	|         | docker-network-20220921214811-5916                |                                          |                   |         |                     |                     |
	| start   | -p                                                | custom-subnet-20220921215125-5916        | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:51 GMT | 21 Sep 22 21:54 GMT |
	|         | custom-subnet-20220921215125-5916                 |                                          |                   |         |                     |                     |
	|         | --subnet=192.168.60.0/24                          |                                          |                   |         |                     |                     |
	| delete  | -p                                                | custom-subnet-20220921215125-5916        | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:54 GMT | 21 Sep 22 21:54 GMT |
	|         | custom-subnet-20220921215125-5916                 |                                          |                   |         |                     |                     |
	| start   | -p first-20220921215450-5916                      | first-20220921215450-5916                | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:54 GMT |                     |
	|         | --driver=docker                                   |                                          |                   |         |                     |                     |
	| delete  | -p second-20220921215450-5916                     | second-20220921215450-5916               | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:55 GMT | 21 Sep 22 21:55 GMT |
	| delete  | -p first-20220921215450-5916                      | first-20220921215450-5916                | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:55 GMT | 21 Sep 22 21:55 GMT |
	| start   | -p                                                | mount-start-1-20220921215543-5916        | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:55 GMT |                     |
	|         | mount-start-1-20220921215543-5916                 |                                          |                   |         |                     |                     |
	|         | --memory=2048 --mount                             |                                          |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize 6543                  |                                          |                   |         |                     |                     |
	|         | --mount-port 46464 --mount-uid 0                  |                                          |                   |         |                     |                     |
	|         | --no-kubernetes --driver=docker                   |                                          |                   |         |                     |                     |
	| delete  | -p                                                | mount-start-2-20220921215543-5916        | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:56 GMT | 21 Sep 22 21:56 GMT |
	|         | mount-start-2-20220921215543-5916                 |                                          |                   |         |                     |                     |
	| delete  | -p                                                | mount-start-1-20220921215543-5916        | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:56 GMT | 21 Sep 22 21:56 GMT |
	|         | mount-start-1-20220921215543-5916                 |                                          |                   |         |                     |                     |
	| start   | -p                                                | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:56 GMT |                     |
	|         | multinode-20220921215635-5916                     |                                          |                   |         |                     |                     |
	|         | --wait=true --memory=2200                         |                                          |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                                          |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                                          |                   |         |                     |                     |
	|         | --driver=docker                                   |                                          |                   |         |                     |                     |
	| kubectl | -p multinode-20220921215635-5916 -- apply -f      | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:57 GMT |                     |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                          |                   |         |                     |                     |
	| kubectl | -p                                                | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:57 GMT |                     |
	|         | multinode-20220921215635-5916                     |                                          |                   |         |                     |                     |
	|         | -- rollout status                                 |                                          |                   |         |                     |                     |
	|         | deployment/busybox                                |                                          |                   |         |                     |                     |
	| kubectl | -p multinode-20220921215635-5916                  | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:57 GMT |                     |
	|         | -- get pods -o                                    |                                          |                   |         |                     |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                                          |                   |         |                     |                     |
	| kubectl | -p multinode-20220921215635-5916                  | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:57 GMT |                     |
	|         | -- get pods -o                                    |                                          |                   |         |                     |                     |
	|         | jsonpath='{.items[*].metadata.name}'              |                                          |                   |         |                     |                     |
	| kubectl | -p                                                | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:57 GMT |                     |
	|         | multinode-20220921215635-5916                     |                                          |                   |         |                     |                     |
	|         | -- exec  -- nslookup                              |                                          |                   |         |                     |                     |
	|         | kubernetes.io                                     |                                          |                   |         |                     |                     |
	| kubectl | -p                                                | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:57 GMT |                     |
	|         | multinode-20220921215635-5916                     |                                          |                   |         |                     |                     |
	|         | -- exec  -- nslookup                              |                                          |                   |         |                     |                     |
	|         | kubernetes.default                                |                                          |                   |         |                     |                     |
	| kubectl | -p multinode-20220921215635-5916                  | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:57 GMT |                     |
	|         | -- exec  -- nslookup                              |                                          |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                                          |                   |         |                     |                     |
	| kubectl | -p multinode-20220921215635-5916                  | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:57 GMT |                     |
	|         | -- get pods -o                                    |                                          |                   |         |                     |                     |
	|         | jsonpath='{.items[*].metadata.name}'              |                                          |                   |         |                     |                     |
	| node    | add -p                                            | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:57 GMT |                     |
	|         | multinode-20220921215635-5916                     |                                          |                   |         |                     |                     |
	|         | -v 3 --alsologtostderr                            |                                          |                   |         |                     |                     |
	| profile | list --output json                                | minikube                                 | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:57 GMT | 21 Sep 22 21:57 GMT |
	| node    | multinode-20220921215635-5916                     | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:57 GMT |                     |
	|         | node stop m03                                     |                                          |                   |         |                     |                     |
	| node    | multinode-20220921215635-5916                     | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:57 GMT |                     |
	|         | node start m03                                    |                                          |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                                          |                   |         |                     |                     |
	| node    | list -p                                           | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:57 GMT |                     |
	|         | multinode-20220921215635-5916                     |                                          |                   |         |                     |                     |
	| stop    | -p                                                | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:57 GMT |                     |
	|         | multinode-20220921215635-5916                     |                                          |                   |         |                     |                     |
	| start   | -p                                                | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:58 GMT |                     |
	|         | multinode-20220921215635-5916                     |                                          |                   |         |                     |                     |
	|         | --wait=true -v=8                                  |                                          |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                                          |                   |         |                     |                     |
	| node    | list -p                                           | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:59 GMT |                     |
	|         | multinode-20220921215635-5916                     |                                          |                   |         |                     |                     |
	| node    | multinode-20220921215635-5916                     | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:59 GMT |                     |
	|         | node delete m03                                   |                                          |                   |         |                     |                     |
	| stop    | multinode-20220921215635-5916                     | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:59 GMT |                     |
	|         | stop                                              |                                          |                   |         |                     |                     |
	| start   | -p                                                | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:59 GMT |                     |
	|         | multinode-20220921215635-5916                     |                                          |                   |         |                     |                     |
	|         | --wait=true -v=8                                  |                                          |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                                          |                   |         |                     |                     |
	|         | --driver=docker                                   |                                          |                   |         |                     |                     |
	| node    | list -p                                           | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:00 GMT |                     |
	|         | multinode-20220921215635-5916                     |                                          |                   |         |                     |                     |
	| start   | -p                                                | multinode-20220921215635-5916-m01        | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:00 GMT |                     |
	|         | multinode-20220921215635-5916-m01                 |                                          |                   |         |                     |                     |
	|         | --driver=docker                                   |                                          |                   |         |                     |                     |
	| start   | -p                                                | multinode-20220921215635-5916-m02        | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:01 GMT |                     |
	|         | multinode-20220921215635-5916-m02                 |                                          |                   |         |                     |                     |
	|         | --driver=docker                                   |                                          |                   |         |                     |                     |
	| node    | add -p                                            | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:02 GMT |                     |
	|         | multinode-20220921215635-5916                     |                                          |                   |         |                     |                     |
	| delete  | -p                                                | multinode-20220921215635-5916-m02        | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:02 GMT | 21 Sep 22 22:02 GMT |
	|         | multinode-20220921215635-5916-m02                 |                                          |                   |         |                     |                     |
	| delete  | -p                                                | multinode-20220921215635-5916            | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:02 GMT | 21 Sep 22 22:02 GMT |
	|         | multinode-20220921215635-5916                     |                                          |                   |         |                     |                     |
	| start   | -p                                                | test-preload-20220921220240-5916         | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:02 GMT |                     |
	|         | test-preload-20220921220240-5916                  |                                          |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                          |                   |         |                     |                     |
	|         | --wait=true --preload=false                       |                                          |                   |         |                     |                     |
	|         | --driver=docker                                   |                                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.17.0                      |                                          |                   |         |                     |                     |
	| delete  | -p                                                | test-preload-20220921220240-5916         | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:03 GMT | 21 Sep 22 22:03 GMT |
	|         | test-preload-20220921220240-5916                  |                                          |                   |         |                     |                     |
	| start   | -p                                                | scheduled-stop-20220921220332-5916       | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:03 GMT |                     |
	|         | scheduled-stop-20220921220332-5916                |                                          |                   |         |                     |                     |
	|         | --memory=2048 --driver=docker                     |                                          |                   |         |                     |                     |
	| delete  | -p                                                | scheduled-stop-20220921220332-5916       | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:04 GMT | 21 Sep 22 22:04 GMT |
	|         | scheduled-stop-20220921220332-5916                |                                          |                   |         |                     |                     |
	| start   | -p                                                | insufficient-storage-20220921220423-5916 | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:04 GMT |                     |
	|         | insufficient-storage-20220921220423-5916          |                                          |                   |         |                     |                     |
	|         | --memory=2048 --output=json --wait=true           |                                          |                   |         |                     |                     |
	|         | --driver=docker                                   |                                          |                   |         |                     |                     |
	| delete  | -p                                                | insufficient-storage-20220921220423-5916 | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:04 GMT | 21 Sep 22 22:04 GMT |
	|         | insufficient-storage-20220921220423-5916          |                                          |                   |         |                     |                     |
	| start   | -p                                                | offline-docker-20220921220434-5916       | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:04 GMT |                     |
	|         | offline-docker-20220921220434-5916                |                                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                          |                   |         |                     |                     |
	|         | --memory=2048 --wait=true                         |                                          |                   |         |                     |                     |
	|         | --driver=docker                                   |                                          |                   |         |                     |                     |
	| start   | -p                                                | force-systemd-flag-20220921220434-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:04 GMT |                     |
	|         | force-systemd-flag-20220921220434-5916            |                                          |                   |         |                     |                     |
	|         | --memory=2048 --force-systemd                     |                                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker            |                                          |                   |         |                     |                     |
	| start   | -p                                                | NoKubernetes-20220921220434-5916         | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:04 GMT |                     |
	|         | NoKubernetes-20220921220434-5916                  |                                          |                   |         |                     |                     |
	|         | --no-kubernetes                                   |                                          |                   |         |                     |                     |
	|         | --kubernetes-version=1.20                         |                                          |                   |         |                     |                     |
	|         | --driver=docker                                   |                                          |                   |         |                     |                     |
	| start   | -p                                                | NoKubernetes-20220921220434-5916         | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:04 GMT |                     |
	|         | NoKubernetes-20220921220434-5916                  |                                          |                   |         |                     |                     |
	|         | --driver=docker                                   |                                          |                   |         |                     |                     |
	| ssh     | force-systemd-flag-20220921220434-5916            | force-systemd-flag-20220921220434-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:05 GMT |                     |
	|         | ssh docker info --format                          |                                          |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                                 |                                          |                   |         |                     |                     |
	| delete  | -p                                                | force-systemd-flag-20220921220434-5916   | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:05 GMT | 21 Sep 22 22:05 GMT |
	|         | force-systemd-flag-20220921220434-5916            |                                          |                   |         |                     |                     |
	| delete  | -p                                                | offline-docker-20220921220434-5916       | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:05 GMT | 21 Sep 22 22:05 GMT |
	|         | offline-docker-20220921220434-5916                |                                          |                   |         |                     |                     |
	| start   | -p                                                | NoKubernetes-20220921220434-5916         | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:05 GMT |                     |
	|         | NoKubernetes-20220921220434-5916                  |                                          |                   |         |                     |                     |
	|         | --no-kubernetes --driver=docker                   |                                          |                   |         |                     |                     |
	| delete  | -p flannel-20220921220528-5916                    | flannel-20220921220528-5916              | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:05 GMT | 21 Sep 22 22:05 GMT |
	| delete  | -p                                                | custom-flannel-20220921220530-5916       | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:05 GMT | 21 Sep 22 22:05 GMT |
	|         | custom-flannel-20220921220530-5916                |                                          |                   |         |                     |                     |
	| start   | -p pause-20220921220531-5916                      | pause-20220921220531-5916                | minikube2\jenkins | v1.27.0 | 21 Sep 22 22:05 GMT |                     |
	|         | --memory=2048                                     |                                          |                   |         |                     |                     |
	|         | --install-addons=false                            |                                          |                   |         |                     |                     |
	|         | --wait=all --driver=docker                        |                                          |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:05:32
	Running on machine: minikube2
	Binary: Built with gc go1.19.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:05:32.237943    5492 out.go:296] Setting OutFile to fd 1564 ...
	I0921 22:05:32.299932    5492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:05:32.299932    5492 out.go:309] Setting ErrFile to fd 1576...
	I0921 22:05:32.299932    5492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:05:32.323946    5492 out.go:303] Setting JSON to false
	I0921 22:05:32.326944    5492 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4000,"bootTime":1663793932,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:05:32.326944    5492 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:05:32.348176    5492 out.go:177] * [pause-20220921220531-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:05:32.351886    5492 notify.go:214] Checking for updates...
	I0921 22:05:32.351886    5492 preload.go:306] deleting older generation preload C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4.download
	I0921 22:05:32.354840    5492 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	W0921 22:05:32.352757    5492 preload.go:309] Failed to clean up older preload files, consider running `minikube delete --all --purge`
	I0921 22:05:32.357059    5492 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:05:32.360274    5492 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:05:32.364855    5492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:05:32.370413    5492 config.go:180] Loaded profile config "NoKubernetes-20220921220434-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0921 22:05:32.370413    5492 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:05:32.371464    5492 config.go:180] Loaded profile config "stopped-upgrade-20220921220434-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0921 22:05:32.371464    5492 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:05:32.750788    5492 docker.go:137] docker version: linux-20.10.17
	I0921 22:05:32.757747    5492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:05:33.390819    5492 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:70 SystemTime:2022-09-21 22:05:32.9714734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:05:33.511534    5492 out.go:177] * Using the docker driver based on user configuration
	I0921 22:05:30.829816    5388 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	I0921 22:05:30.829816    5388 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	W0921 22:05:30.878826    5388 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0921 22:05:30.878826    5388 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\NoKubernetes-20220921220434-5916\config.json ...
	I0921 22:05:31.057738    5388 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:05:31.057738    5388 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:05:31.057738    5388 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:05:31.057738    5388 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:05:31.057738    5388 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:05:31.057738    5388 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:05:31.057738    5388 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:05:31.057738    5388 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:05:31.057738    5388 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:05:33.517097    5492 start.go:284] selected driver: docker
	I0921 22:05:33.517097    5492 start.go:808] validating driver "docker" against <nil>
	I0921 22:05:33.517338    5492 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:05:33.589403    5492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:05:34.246898    5492 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:70 SystemTime:2022-09-21 22:05:33.80835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:05:34.247904    5492 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:05:34.248898    5492 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:05:34.253945    5492 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:05:34.256905    5492 cni.go:95] Creating CNI manager for ""
	I0921 22:05:34.256905    5492 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:05:34.256905    5492 start_flags.go:316] config:
	{Name:pause-20220921220531-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:pause-20220921220531-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:05:34.264869    5492 out.go:177] * Starting control plane node pause-20220921220531-5916 in cluster pause-20220921220531-5916
	I0921 22:05:34.267487    5492 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:05:34.271475    5492 out.go:177] * Pulling base image ...
	I0921 22:05:34.091978    5388 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:05:34.092047    5388 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:05:34.092121    5388 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:05:34.092490    5388 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:05:34.292634    5388 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:05:36.451432    5388 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:05:36.451432    5388 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:05:36.451432    5388 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:05:36.451432    5388 start.go:364] acquiring machines lock for NoKubernetes-20220921220434-5916: {Name:mkc08fa3b9614fd3c596a2381497b0f330e59f13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:05:36.451432    5388 start.go:368] acquired machines lock for "NoKubernetes-20220921220434-5916" in 0s
	I0921 22:05:36.452446    5388 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:05:36.452446    5388 fix.go:55] fixHost starting: 
	I0921 22:05:36.472430    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:36.683672    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:36.683672    5388 fix.go:103] recreateIfNeeded on NoKubernetes-20220921220434-5916: state= err=unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:36.683672    5388 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:05:36.697634    5388 out.go:177] * docker "NoKubernetes-20220921220434-5916" container is missing, will recreate.
	I0921 22:05:34.274102    5492 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:05:34.274143    5492 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:05:34.274218    5492 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:05:34.274218    5492 cache.go:57] Caching tarball of preloaded images
	I0921 22:05:34.274218    5492 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:05:34.274827    5492 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:05:34.274880    5492 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-20220921220531-5916\config.json ...
	I0921 22:05:34.274880    5492 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-20220921220531-5916\config.json: {Name:mkd887713ff73f91343ab39ceb20716472e6bd7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:05:34.510671    5492 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:05:34.510671    5492 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:05:34.510671    5492 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:05:34.510671    5492 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:05:34.510671    5492 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:05:34.510671    5492 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:05:34.511463    5492 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:05:34.511463    5492 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:05:34.511607    5492 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:05:37.059188    5492 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:05:37.059188    5492 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:05:37.059188    5492 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:05:37.059188    5492 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:05:37.283243    5492 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:05:36.701774    5388 delete.go:124] DEMOLISHING NoKubernetes-20220921220434-5916 ...
	I0921 22:05:36.715818    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:36.901855    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:05:36.901855    5388 stop.go:75] unable to get state: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:36.901855    5388 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:36.915856    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:37.110377    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:37.110377    5388 delete.go:82] Unable to get host status for NoKubernetes-20220921220434-5916, assuming it has already been deleted: state: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:37.119356    5388 cli_runner.go:164] Run: docker container inspect -f {{.Id}} NoKubernetes-20220921220434-5916
	W0921 22:05:37.329985    5388 cli_runner.go:211] docker container inspect -f {{.Id}} NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:05:37.329985    5388 kic.go:356] could not find the container NoKubernetes-20220921220434-5916 to remove it. will try anyways
	I0921 22:05:37.339414    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:37.535111    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:05:37.535111    5388 oci.go:84] error getting container status, will try to delete anyways: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:37.542094    5388 cli_runner.go:164] Run: docker exec --privileged -t NoKubernetes-20220921220434-5916 /bin/bash -c "sudo init 0"
	W0921 22:05:37.737058    5388 cli_runner.go:211] docker exec --privileged -t NoKubernetes-20220921220434-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:05:37.737058    5388 oci.go:646] error shutdown NoKubernetes-20220921220434-5916: docker exec --privileged -t NoKubernetes-20220921220434-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:38.745243    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:38.939643    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:38.939643    5388 oci.go:658] temporary error verifying shutdown: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:38.939643    5388 oci.go:660] temporary error: container NoKubernetes-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:38.939643    5388 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:39.142202    5492 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:05:39.142202    5492 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:05:39.142202    5492 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:05:39.142202    5492 start.go:364] acquiring machines lock for pause-20220921220531-5916: {Name:mke6391a2bb25b0d25df6636a4983db8f8affe6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:05:39.142741    5492 start.go:368] acquired machines lock for "pause-20220921220531-5916" in 539.5µs
	I0921 22:05:39.142898    5492 start.go:93] Provisioning new machine with config: &{Name:pause-20220921220531-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:pause-20220921220531-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:05:39.142898    5492 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:05:39.146352    5492 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:05:39.147006    5492 start.go:159] libmachine.API.Create for "pause-20220921220531-5916" (driver="docker")
	I0921 22:05:39.147006    5492 client.go:168] LocalClient.Create starting
	I0921 22:05:39.147006    5492 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:05:39.147681    5492 main.go:134] libmachine: Decoding PEM data...
	I0921 22:05:39.147681    5492 main.go:134] libmachine: Parsing certificate...
	I0921 22:05:39.148403    5492 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:05:39.148403    5492 main.go:134] libmachine: Decoding PEM data...
	I0921 22:05:39.148643    5492 main.go:134] libmachine: Parsing certificate...
	I0921 22:05:39.160668    5492 cli_runner.go:164] Run: docker network inspect pause-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:05:39.343863    5492 cli_runner.go:211] docker network inspect pause-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:05:39.351211    5492 network_create.go:272] running [docker network inspect pause-20220921220531-5916] to gather additional debugging logs...
	I0921 22:05:39.351211    5492 cli_runner.go:164] Run: docker network inspect pause-20220921220531-5916
	W0921 22:05:39.545684    5492 cli_runner.go:211] docker network inspect pause-20220921220531-5916 returned with exit code 1
	I0921 22:05:39.545684    5492 network_create.go:275] error running [docker network inspect pause-20220921220531-5916]: docker network inspect pause-20220921220531-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: pause-20220921220531-5916
	I0921 22:05:39.545684    5492 network_create.go:277] output of [docker network inspect pause-20220921220531-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: pause-20220921220531-5916
	
	** /stderr **
	I0921 22:05:39.553936    5492 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:05:39.798733    5492 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005469f8] misses:0}
	I0921 22:05:39.799850    5492 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:05:39.799850    5492 network_create.go:115] attempt to create docker network pause-20220921220531-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:05:39.806922    5492 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=pause-20220921220531-5916 pause-20220921220531-5916
	W0921 22:05:39.995575    5492 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=pause-20220921220531-5916 pause-20220921220531-5916 returned with exit code 1
	E0921 22:05:39.995575    5492 network_create.go:104] error while trying to create docker network pause-20220921220531-5916 192.168.49.0/24: create docker network pause-20220921220531-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=pause-20220921220531-5916 pause-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d369c77bf76ce8a226ce0ce224b94404605d0b4f525befc3ed80b66e92e22b02 (br-d369c77bf76c): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:05:39.995575    5492 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network pause-20220921220531-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=pause-20220921220531-5916 pause-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d369c77bf76ce8a226ce0ce224b94404605d0b4f525befc3ed80b66e92e22b02 (br-d369c77bf76c): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
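Note on the failure above: minikube reserved 192.168.49.0/24 for pause-20220921220531-5916, but the daemon already has a bridge (br-a04d36bfb3cf) whose IPv4 range overlaps it, so `docker network create` is refused and the cluster continues without a dedicated network. A minimal Go sketch of checking whether a candidate subnet is already taken, shelling out to the same docker CLI commands the cli_runner lines show (the helper name and the hard-coded subnet are illustrative only, not minikube code, and this only catches exact matches rather than full CIDR overlap):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // existingSubnets asks the docker CLI for the IPv4 subnets of every network,
    // using the same --format template that appears in the log above.
    func existingSubnets() ([]string, error) {
        ids, err := exec.Command("docker", "network", "ls", "-q").Output()
        if err != nil {
            return nil, err
        }
        var subnets []string
        for _, id := range strings.Fields(string(ids)) {
            out, err := exec.Command("docker", "network", "inspect", id,
                "--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
            if err != nil {
                continue // the network may have been removed between the two calls
            }
            subnets = append(subnets, strings.Fields(string(out))...)
        }
        return subnets, nil
    }

    func main() {
        want := "192.168.49.0/24" // the subnet minikube tried to reserve above
        subnets, err := existingSubnets()
        if err != nil {
            fmt.Println("docker not reachable:", err)
            return
        }
        for _, s := range subnets {
            if s == want {
                fmt.Printf("conflict: %s is already in use by another network\n", want)
            }
        }
    }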
	
	I0921 22:05:40.015501    5492 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:05:40.217666    5492 cli_runner.go:164] Run: docker volume create pause-20220921220531-5916 --label name.minikube.sigs.k8s.io=pause-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:05:40.427842    5492 cli_runner.go:211] docker volume create pause-20220921220531-5916 --label name.minikube.sigs.k8s.io=pause-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:05:40.427842    5492 client.go:171] LocalClient.Create took 1.2808271s
	I0921 22:05:39.508941    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:39.715808    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:39.715952    5388 oci.go:658] temporary error verifying shutdown: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:39.715952    5388 oci.go:660] temporary error: container NoKubernetes-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:39.715952    5388 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:40.812151    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:41.004438    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:41.004498    5388 oci.go:658] temporary error verifying shutdown: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:41.004530    5388 oci.go:660] temporary error: container NoKubernetes-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:41.004571    5388 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:42.335500    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:42.545307    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:42.545467    5388 oci.go:658] temporary error verifying shutdown: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:42.545467    5388 oci.go:660] temporary error: container NoKubernetes-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:42.545530    5388 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:42.443517    5492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:05:42.453166    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:05:42.683581    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:05:42.683968    5492 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:42.984901    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:05:43.188936    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:05:43.188936    5492 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:43.747981    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:05:43.951966    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	W0921 22:05:43.952148    5492 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	
	W0921 22:05:43.952148    5492 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:43.963275    5492 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:05:43.970964    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:05:44.197298    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:05:44.197298    5492 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:44.450661    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:05:44.644694    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:05:44.644694    5492 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:45.011785    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:05:45.189033    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:05:45.189033    5492 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:45.880612    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:05:46.103053    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	W0921 22:05:46.103053    5492 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	
	W0921 22:05:46.103053    5492 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:46.103053    5492 start.go:128] duration metric: createHost completed in 6.9601033s
	I0921 22:05:46.103053    5492 start.go:83] releasing machines lock for "pause-20220921220531-5916", held for 6.9602597s
	W0921 22:05:46.103053    5492 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for pause-20220921220531-5916 container: docker volume create pause-20220921220531-5916 --label name.minikube.sigs.k8s.io=pause-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/pause-20220921220531-5916': mkdir /var/lib/docker/volumes/pause-20220921220531-5916: read-only file system
	I0921 22:05:46.117100    5492 cli_runner.go:164] Run: docker container inspect pause-20220921220531-5916 --format={{.State.Status}}
	W0921 22:05:46.319451    5492 cli_runner.go:211] docker container inspect pause-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:46.319451    5492 delete.go:82] Unable to get host status for pause-20220921220531-5916, assuming it has already been deleted: state: unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	W0921 22:05:46.319451    5492 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for pause-20220921220531-5916 container: docker volume create pause-20220921220531-5916 --label name.minikube.sigs.k8s.io=pause-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/pause-20220921220531-5916': mkdir /var/lib/docker/volumes/pause-20220921220531-5916: read-only file system
	
	I0921 22:05:46.319451    5492 start.go:617] Will try again in 5 seconds ...
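Note: both profiles are failing for the same underlying reason here: the daemon reports /var/lib/docker/volumes as a read-only file system, so volume and container creation is refused, the containers never exist, and the repeated `docker container inspect` calls keep returning "No such container" until the backoff in retry.go gives up. A rough Go sketch of that inspect-and-retry pattern, assuming a simple doubling delay (the real delays in the log differ) and using the same docker command the log shows; the function names are illustrative, not minikube's:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // containerState mirrors the "docker container inspect <name> --format={{.State.Status}}"
    // calls that appear throughout the log.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForState polls the container state with a doubling delay, similar in
    // spirit to the retry.go lines above (the delays here are illustrative).
    func waitForState(name, want string, attempts int) error {
        delay := 500 * time.Millisecond
        for i := 0; i < attempts; i++ {
            state, err := containerState(name)
            if err == nil && state == want {
                return nil
            }
            fmt.Printf("will retry after %v: state=%q err=%v\n", delay, state, err)
            time.Sleep(delay)
            delay *= 2
        }
        return fmt.Errorf("container %q never reached state %q", name, want)
    }

    func main() {
        if err := waitForState("pause-20220921220531-5916", "exited", 5); err != nil {
            fmt.Println(err)
        }
    }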
	I0921 22:05:44.148080    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:44.349482    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:44.349527    5388 oci.go:658] temporary error verifying shutdown: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:44.349579    5388 oci.go:660] temporary error: container NoKubernetes-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:44.349616    5388 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:46.699334    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:46.923780    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:46.923947    5388 oci.go:658] temporary error verifying shutdown: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:46.923984    5388 oci.go:660] temporary error: container NoKubernetes-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:46.924008    5388 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:51.320803    5492 start.go:364] acquiring machines lock for pause-20220921220531-5916: {Name:mke6391a2bb25b0d25df6636a4983db8f8affe6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:05:51.320803    5492 start.go:368] acquired machines lock for "pause-20220921220531-5916" in 0s
	I0921 22:05:51.321343    5492 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:05:51.321343    5492 fix.go:55] fixHost starting: 
	I0921 22:05:51.335062    5492 cli_runner.go:164] Run: docker container inspect pause-20220921220531-5916 --format={{.State.Status}}
	W0921 22:05:51.524080    5492 cli_runner.go:211] docker container inspect pause-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:51.524080    5492 fix.go:103] recreateIfNeeded on pause-20220921220531-5916: state= err=unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:51.524080    5492 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:05:51.611038    5492 out.go:177] * docker "pause-20220921220531-5916" container is missing, will recreate.
	I0921 22:05:51.614332    5492 delete.go:124] DEMOLISHING pause-20220921220531-5916 ...
	I0921 22:05:51.633758    5492 cli_runner.go:164] Run: docker container inspect pause-20220921220531-5916 --format={{.State.Status}}
	W0921 22:05:51.848908    5492 cli_runner.go:211] docker container inspect pause-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:05:51.849069    5492 stop.go:75] unable to get state: unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:51.849069    5492 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:51.863359    5492 cli_runner.go:164] Run: docker container inspect pause-20220921220531-5916 --format={{.State.Status}}
	W0921 22:05:52.048699    5492 cli_runner.go:211] docker container inspect pause-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:52.048780    5492 delete.go:82] Unable to get host status for pause-20220921220531-5916, assuming it has already been deleted: state: unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:52.056236    5492 cli_runner.go:164] Run: docker container inspect -f {{.Id}} pause-20220921220531-5916
	W0921 22:05:52.235958    5492 cli_runner.go:211] docker container inspect -f {{.Id}} pause-20220921220531-5916 returned with exit code 1
	I0921 22:05:52.235988    5492 kic.go:356] could not find the container pause-20220921220531-5916 to remove it. will try anyways
	I0921 22:05:52.243465    5492 cli_runner.go:164] Run: docker container inspect pause-20220921220531-5916 --format={{.State.Status}}
	I0921 22:05:51.454204    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:51.678112    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:51.678112    5388 oci.go:658] temporary error verifying shutdown: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:51.678112    5388 oci.go:660] temporary error: container NoKubernetes-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:51.678112    5388 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:54.922406    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:05:55.113803    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:55.113877    5388 oci.go:658] temporary error verifying shutdown: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:05:55.113877    5388 oci.go:660] temporary error: container NoKubernetes-20220921220434-5916 status is  but expect it to be exited
	I0921 22:05:55.113877    5388 oci.go:88] couldn't shut down NoKubernetes-20220921220434-5916 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	 
	I0921 22:05:55.120678    5388 cli_runner.go:164] Run: docker rm -f -v NoKubernetes-20220921220434-5916
	I0921 22:05:55.325362    5388 cli_runner.go:164] Run: docker container inspect -f {{.Id}} NoKubernetes-20220921220434-5916
	W0921 22:05:55.505560    5388 cli_runner.go:211] docker container inspect -f {{.Id}} NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:05:55.511951    5388 cli_runner.go:164] Run: docker network inspect NoKubernetes-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:05:55.723950    5388 cli_runner.go:211] docker network inspect NoKubernetes-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:05:55.730445    5388 network_create.go:272] running [docker network inspect NoKubernetes-20220921220434-5916] to gather additional debugging logs...
	I0921 22:05:55.730445    5388 cli_runner.go:164] Run: docker network inspect NoKubernetes-20220921220434-5916
	W0921 22:05:55.924819    5388 cli_runner.go:211] docker network inspect NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:05:55.924866    5388 network_create.go:275] error running [docker network inspect NoKubernetes-20220921220434-5916]: docker network inspect NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: NoKubernetes-20220921220434-5916
	I0921 22:05:55.924866    5388 network_create.go:277] output of [docker network inspect NoKubernetes-20220921220434-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: NoKubernetes-20220921220434-5916
	
	** /stderr **
	W0921 22:05:55.925853    5388 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:05:55.925853    5388 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:05:56.938639    5388 start.go:125] createHost starting for "" (driver="docker")
	W0921 22:05:52.436595    5492 cli_runner.go:211] docker container inspect pause-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:05:52.436804    5492 oci.go:84] error getting container status, will try to delete anyways: unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:52.443608    5492 cli_runner.go:164] Run: docker exec --privileged -t pause-20220921220531-5916 /bin/bash -c "sudo init 0"
	W0921 22:05:52.637935    5492 cli_runner.go:211] docker exec --privileged -t pause-20220921220531-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:05:52.637935    5492 oci.go:646] error shutdown pause-20220921220531-5916: docker exec --privileged -t pause-20220921220531-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:53.660970    5492 cli_runner.go:164] Run: docker container inspect pause-20220921220531-5916 --format={{.State.Status}}
	W0921 22:05:53.853738    5492 cli_runner.go:211] docker container inspect pause-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:53.853738    5492 oci.go:658] temporary error verifying shutdown: unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:53.853738    5492 oci.go:660] temporary error: container pause-20220921220531-5916 status is  but expect it to be exited
	I0921 22:05:53.853738    5492 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %!v(MISSING): unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:54.202965    5492 cli_runner.go:164] Run: docker container inspect pause-20220921220531-5916 --format={{.State.Status}}
	W0921 22:05:54.395890    5492 cli_runner.go:211] docker container inspect pause-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:54.395890    5492 oci.go:658] temporary error verifying shutdown: unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:54.395890    5492 oci.go:660] temporary error: container pause-20220921220531-5916 status is  but expect it to be exited
	I0921 22:05:54.395890    5492 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %!v(MISSING): unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:54.862103    5492 cli_runner.go:164] Run: docker container inspect pause-20220921220531-5916 --format={{.State.Status}}
	W0921 22:05:55.081849    5492 cli_runner.go:211] docker container inspect pause-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:55.081849    5492 oci.go:658] temporary error verifying shutdown: unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:55.081849    5492 oci.go:660] temporary error: container pause-20220921220531-5916 status is  but expect it to be exited
	I0921 22:05:55.081849    5492 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %!v(MISSING): unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:55.996591    5492 cli_runner.go:164] Run: docker container inspect pause-20220921220531-5916 --format={{.State.Status}}
	W0921 22:05:56.190053    5492 cli_runner.go:211] docker container inspect pause-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:56.190053    5492 oci.go:658] temporary error verifying shutdown: unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:56.190053    5492 oci.go:660] temporary error: container pause-20220921220531-5916 status is  but expect it to be exited
	I0921 22:05:56.190053    5492 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %!v(MISSING): unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
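The block above is the shutdown-verification loop: oci.go polls docker container inspect --format={{.State.Status}} and retry.go backs off with growing delays because the container no longer exists, so the state always comes back empty. A minimal sketch of that polling pattern, assuming plain os/exec and purely illustrative delays (this is not minikube's retry.go):

// verify_shutdown.go: mirror the polling seen above - repeatedly inspect the
// container state with growing delays until it reports "exited" or attempts run out.
// The container name is copied from the log; delays and attempt count are made up.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func waitForExit(name string, attempts int) error {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		state, err := containerState(name)
		if err == nil && state == "exited" {
			return nil
		}
		// As in the log, an inspect error ("No such container") is treated as a
		// temporary failure and retried with a longer delay.
		fmt.Printf("will retry after %v: state=%q err=%v\n", delay, state, err)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("couldn't verify %s is exited", name)
}

func main() {
	if err := waitForExit("pause-20220921220531-5916", 6); err != nil {
		fmt.Println(err)
	}
}
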
	I0921 22:05:56.942154    5388 out.go:204] * Creating docker container (CPUs=2, Memory=16300MB) ...
	I0921 22:05:56.943056    5388 start.go:159] libmachine.API.Create for "NoKubernetes-20220921220434-5916" (driver="docker")
	I0921 22:05:56.943056    5388 client.go:168] LocalClient.Create starting
	I0921 22:05:56.943680    5388 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:05:56.943869    5388 main.go:134] libmachine: Decoding PEM data...
	I0921 22:05:56.943869    5388 main.go:134] libmachine: Parsing certificate...
	I0921 22:05:56.944093    5388 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:05:56.944211    5388 main.go:134] libmachine: Decoding PEM data...
	I0921 22:05:56.944211    5388 main.go:134] libmachine: Parsing certificate...
	I0921 22:05:56.952857    5388 cli_runner.go:164] Run: docker network inspect NoKubernetes-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:05:57.185556    5388 cli_runner.go:211] docker network inspect NoKubernetes-20220921220434-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:05:57.192766    5388 network_create.go:272] running [docker network inspect NoKubernetes-20220921220434-5916] to gather additional debugging logs...
	I0921 22:05:57.192766    5388 cli_runner.go:164] Run: docker network inspect NoKubernetes-20220921220434-5916
	W0921 22:05:57.379391    5388 cli_runner.go:211] docker network inspect NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:05:57.379604    5388 network_create.go:275] error running [docker network inspect NoKubernetes-20220921220434-5916]: docker network inspect NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: NoKubernetes-20220921220434-5916
	I0921 22:05:57.379604    5388 network_create.go:277] output of [docker network inspect NoKubernetes-20220921220434-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: NoKubernetes-20220921220434-5916
	
	** /stderr **
	I0921 22:05:57.386285    5388 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:05:57.589191    5388 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00060ebf0] misses:0}
	I0921 22:05:57.589552    5388 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:05:57.589552    5388 network_create.go:115] attempt to create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:05:57.596513    5388 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916
	W0921 22:05:57.783336    5388 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916 returned with exit code 1
	E0921 22:05:57.783487    5388 network_create.go:104] error while trying to create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24: create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7509e19f81ad64dca4e198fc8c96b9ae339844a71407e9ff065883a6332abc01 (br-7509e19f81ad): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:05:57.783487    5388 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7509e19f81ad64dca4e198fc8c96b9ae339844a71407e9ff065883a6332abc01 (br-7509e19f81ad): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:05:57.798868    5388 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:05:58.008521    5388 cli_runner.go:164] Run: docker volume create NoKubernetes-20220921220434-5916 --label name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:05:58.219527    5388 cli_runner.go:211] docker volume create NoKubernetes-20220921220434-5916 --label name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:05:58.219635    5388 client.go:171] LocalClient.Create took 1.2765693s
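At this point two create steps have failed for NoKubernetes-20220921220434-5916: the dedicated network (192.168.49.0/24 conflicts with an existing bridge, br-a04d36bfb3cf in the daemon error above) and the volume create. A hedged diagnostic sketch for the network half, assuming only that the Docker CLI is on PATH, which lists each network's subnets so the overlapping bridge can be spotted:

// list_subnets.go: print every Docker network and the subnets it owns, using the
// same IPAM.Config template fields the log's inspect command uses.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		panic(err)
	}
	for _, name := range strings.Fields(string(out)) {
		subnets, err := exec.Command("docker", "network", "inspect", name,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue
		}
		fmt.Printf("%-30s %s\n", name, strings.TrimSpace(string(subnets)))
	}
}
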
	I0921 22:05:57.919597    5492 cli_runner.go:164] Run: docker container inspect pause-20220921220531-5916 --format={{.State.Status}}
	W0921 22:05:58.126242    5492 cli_runner.go:211] docker container inspect pause-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:05:58.126355    5492 oci.go:658] temporary error verifying shutdown: unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:05:58.126373    5492 oci.go:660] temporary error: container pause-20220921220531-5916 status is  but expect it to be exited
	I0921 22:05:58.126393    5492 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %!v(MISSING): unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:01.462909    5492 cli_runner.go:164] Run: docker container inspect pause-20220921220531-5916 --format={{.State.Status}}
	W0921 22:06:01.680465    5492 cli_runner.go:211] docker container inspect pause-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:01.680569    5492 oci.go:658] temporary error verifying shutdown: unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:01.680598    5492 oci.go:660] temporary error: container pause-20220921220531-5916 status is  but expect it to be exited
	I0921 22:06:01.680598    5492 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %!v(MISSING): unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:00.238153    5388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:06:00.245627    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:00.443010    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:06:00.443010    5388 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:00.606062    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:00.800280    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:06:00.800280    5388 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:01.119221    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:01.300544    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:06:01.300544    5388 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:01.889617    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:02.083660    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	W0921 22:06:02.083660    5388 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	
	W0921 22:06:02.083660    5388 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
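The df probes above never get to run: ssh_runner first needs the host port published for the container's 22/tcp endpoint, and the inspect template exits with status 1 because the container was never created. A small sketch of that lookup, reusing the exact Go template shown in the log; the surrounding wrapper is illustrative, not minikube's code:

// ssh_port.go: ask Docker for the host port mapped to 22/tcp on a container.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		// With no such container, inspect exits 1 and the caller retries (retry.go:31 above).
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("NoKubernetes-20220921220434-5916")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", port)
}
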
	I0921 22:06:02.095497    5388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:06:02.101139    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:02.299805    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:06:02.299805    5388 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:02.496753    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:02.672056    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:06:02.672056    5388 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:03.025960    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:03.222257    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:06:03.222257    5388 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:03.697511    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:03.876474    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	W0921 22:06:03.876474    5388 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	
	W0921 22:06:03.876474    5388 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:03.876474    5388 start.go:128] duration metric: createHost completed in 6.9368621s
	I0921 22:06:03.887433    5388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:06:03.893393    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	I0921 22:06:04.407344    5492 cli_runner.go:164] Run: docker container inspect pause-20220921220531-5916 --format={{.State.Status}}
	W0921 22:06:04.602437    5492 cli_runner.go:211] docker container inspect pause-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:04.602437    5492 oci.go:658] temporary error verifying shutdown: unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:04.602437    5492 oci.go:660] temporary error: container pause-20220921220531-5916 status is  but expect it to be exited
	I0921 22:06:04.602437    5492 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %!v(MISSING): unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	W0921 22:06:04.079458    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:06:04.079458    5388 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:04.298631    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:04.494384    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:06:04.494384    5388 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:04.811705    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:04.992299    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:06:04.992299    5388 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:05.672434    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:05.850582    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	W0921 22:06:05.850582    5388 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	
	W0921 22:06:05.850582    5388 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:05.861811    5388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:06:05.868811    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:06.051799    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:06:06.052257    5388 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:06.248419    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:06.440704    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:06:06.440704    5388 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:06.784670    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:06.991793    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:06:06.991793    5388 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:07.608435    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916
	W0921 22:06:07.816347    5388 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916 returned with exit code 1
	W0921 22:06:07.816347    5388 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	
	W0921 22:06:07.816347    5388 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:07.816347    5388 fix.go:57] fixHost completed within 31.3636649s
	I0921 22:06:07.816347    5388 start.go:83] releasing machines lock for "NoKubernetes-20220921220434-5916", held for 31.3646787s
	W0921 22:06:07.816347    5388 start.go:602] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220921220434-5916 container: docker volume create NoKubernetes-20220921220434-5916 --label name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220921220434-5916': mkdir /var/lib/docker/volumes/NoKubernetes-20220921220434-5916: read-only file system
	W0921 22:06:07.817345    5388 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220921220434-5916 container: docker volume create NoKubernetes-20220921220434-5916 --label name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220921220434-5916': mkdir /var/lib/docker/volumes/NoKubernetes-20220921220434-5916: read-only file system
	
	I0921 22:06:07.817345    5388 start.go:617] Will try again in 5 seconds ...
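The underlying cause surfaces in the StartHost error above: the daemon cannot create the volume because /var/lib/docker/volumes is on a read-only file system inside the Docker Desktop VM, so the retry in 5 seconds is likely to hit the same wall. A minimal sketch that re-runs the same volume-create command and surfaces the daemon's stderr directly (names and labels copied from the log; the wrapper itself is illustrative):

// volume_probe.go: attempt the volume create and print the daemon error verbatim.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	name := "NoKubernetes-20220921220434-5916"
	cmd := exec.Command("docker", "volume", "create", name,
		"--label", "name.minikube.sigs.k8s.io="+name,
		"--label", "created_by.minikube.sigs.k8s.io=true")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		// Expect the daemon error seen above: "... read-only file system".
		fmt.Printf("volume create failed: %v\n%s", err, stderr.String())
		return
	}
	fmt.Println("volume created:", name)
}
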
	I0921 22:06:09.637549    5492 cli_runner.go:164] Run: docker container inspect pause-20220921220531-5916 --format={{.State.Status}}
	W0921 22:06:09.860015    5492 cli_runner.go:211] docker container inspect pause-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:09.860283    5492 oci.go:658] temporary error verifying shutdown: unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:09.860283    5492 oci.go:660] temporary error: container pause-20220921220531-5916 status is  but expect it to be exited
	I0921 22:06:09.860283    5492 oci.go:88] couldn't shut down pause-20220921220531-5916 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "pause-20220921220531-5916": docker container inspect pause-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	 
	I0921 22:06:09.870840    5492 cli_runner.go:164] Run: docker rm -f -v pause-20220921220531-5916
	I0921 22:06:10.057135    5492 cli_runner.go:164] Run: docker container inspect -f {{.Id}} pause-20220921220531-5916
	W0921 22:06:10.267089    5492 cli_runner.go:211] docker container inspect -f {{.Id}} pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:10.275072    5492 cli_runner.go:164] Run: docker network inspect pause-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:06:10.453211    5492 cli_runner.go:211] docker network inspect pause-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:06:10.459735    5492 network_create.go:272] running [docker network inspect pause-20220921220531-5916] to gather additional debugging logs...
	I0921 22:06:10.459735    5492 cli_runner.go:164] Run: docker network inspect pause-20220921220531-5916
	W0921 22:06:10.670011    5492 cli_runner.go:211] docker network inspect pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:10.670011    5492 network_create.go:275] error running [docker network inspect pause-20220921220531-5916]: docker network inspect pause-20220921220531-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: pause-20220921220531-5916
	I0921 22:06:10.670222    5492 network_create.go:277] output of [docker network inspect pause-20220921220531-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: pause-20220921220531-5916
	
	** /stderr **
	W0921 22:06:10.671575    5492 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:06:10.671575    5492 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:06:11.680184    5492 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:06:11.705553    5492 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:06:11.705941    5492 start.go:159] libmachine.API.Create for "pause-20220921220531-5916" (driver="docker")
	I0921 22:06:11.705941    5492 client.go:168] LocalClient.Create starting
	I0921 22:06:11.706694    5492 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:06:11.706694    5492 main.go:134] libmachine: Decoding PEM data...
	I0921 22:06:11.706694    5492 main.go:134] libmachine: Parsing certificate...
	I0921 22:06:11.706694    5492 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:06:11.707376    5492 main.go:134] libmachine: Decoding PEM data...
	I0921 22:06:11.707376    5492 main.go:134] libmachine: Parsing certificate...
	I0921 22:06:11.716838    5492 cli_runner.go:164] Run: docker network inspect pause-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:06:11.914360    5492 cli_runner.go:211] docker network inspect pause-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:06:11.922474    5492 network_create.go:272] running [docker network inspect pause-20220921220531-5916] to gather additional debugging logs...
	I0921 22:06:11.922543    5492 cli_runner.go:164] Run: docker network inspect pause-20220921220531-5916
	W0921 22:06:12.114647    5492 cli_runner.go:211] docker network inspect pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:12.114647    5492 network_create.go:275] error running [docker network inspect pause-20220921220531-5916]: docker network inspect pause-20220921220531-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: pause-20220921220531-5916
	I0921 22:06:12.114647    5492 network_create.go:277] output of [docker network inspect pause-20220921220531-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: pause-20220921220531-5916
	
	** /stderr **
	I0921 22:06:12.122925    5492 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:06:12.826265    5388 start.go:364] acquiring machines lock for NoKubernetes-20220921220434-5916: {Name:mkc08fa3b9614fd3c596a2381497b0f330e59f13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:06:12.826265    5388 start.go:368] acquired machines lock for "NoKubernetes-20220921220434-5916" in 0s
	I0921 22:06:12.826265    5388 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:06:12.826265    5388 fix.go:55] fixHost starting: 
	I0921 22:06:12.844299    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:06:13.074415    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:13.074415    5388 fix.go:103] recreateIfNeeded on NoKubernetes-20220921220434-5916: state= err=unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:13.074415    5388 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:06:13.099655    5388 out.go:177] * docker "NoKubernetes-20220921220434-5916" container is missing, will recreate.
	I0921 22:06:13.103618    5388 delete.go:124] DEMOLISHING NoKubernetes-20220921220434-5916 ...
	I0921 22:06:13.119793    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:06:13.307429    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:06:13.307503    5388 stop.go:75] unable to get state: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:13.307550    5388 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:13.324269    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:06:13.554606    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:13.554777    5388 delete.go:82] Unable to get host status for NoKubernetes-20220921220434-5916, assuming it has already been deleted: state: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:13.566710    5388 cli_runner.go:164] Run: docker container inspect -f {{.Id}} NoKubernetes-20220921220434-5916
	W0921 22:06:13.769550    5388 cli_runner.go:211] docker container inspect -f {{.Id}} NoKubernetes-20220921220434-5916 returned with exit code 1
	I0921 22:06:13.769550    5388 kic.go:356] could not find the container NoKubernetes-20220921220434-5916 to remove it. will try anyways
	I0921 22:06:13.776684    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:06:13.957448    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:06:13.957507    5388 oci.go:84] error getting container status, will try to delete anyways: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:13.965374    5388 cli_runner.go:164] Run: docker exec --privileged -t NoKubernetes-20220921220434-5916 /bin/bash -c "sudo init 0"
	I0921 22:06:12.347046    5492 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005469f8] amended:false}} dirty:map[] misses:0}
	I0921 22:06:12.347046    5492 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:06:12.363010    5492 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005469f8] amended:true}} dirty:map[192.168.49.0:0xc0005469f8 192.168.58.0:0xc000746228] misses:0}
	I0921 22:06:12.363010    5492 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:06:12.363010    5492 network_create.go:115] attempt to create docker network pause-20220921220531-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:06:12.371034    5492 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=pause-20220921220531-5916 pause-20220921220531-5916
	W0921 22:06:12.566574    5492 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=pause-20220921220531-5916 pause-20220921220531-5916 returned with exit code 1
	E0921 22:06:12.566574    5492 network_create.go:104] error while trying to create docker network pause-20220921220531-5916 192.168.58.0/24: create docker network pause-20220921220531-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=pause-20220921220531-5916 pause-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 04f89a1e7e692738319b2c79dd3144813c707c521b2dce3c4cd2e27085331103 (br-04f89a1e7e69): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:06:12.567457    5492 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network pause-20220921220531-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=pause-20220921220531-5916 pause-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 04f89a1e7e692738319b2c79dd3144813c707c521b2dce3c4cd2e27085331103 (br-04f89a1e7e69): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:06:12.580495    5492 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:06:12.772395    5492 cli_runner.go:164] Run: docker volume create pause-20220921220531-5916 --label name.minikube.sigs.k8s.io=pause-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:06:12.965161    5492 cli_runner.go:211] docker volume create pause-20220921220531-5916 --label name.minikube.sigs.k8s.io=pause-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:06:12.965161    5492 client.go:171] LocalClient.Create took 1.2592101s
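For the pause profile the subnet picker behaves as expected even though the create still fails: network.go skips 192.168.49.0/24 because it is reserved by the other profile and reserves 192.168.58.0/24 instead. A sketch of that stepping, assuming a fixed step of 9 between candidate /24s, which is inferred from the 49.0 to 58.0 jump in the log and is an assumption rather than a quote of minikube's network.go:

// subnet_pick.go: walk candidate /24 subnets and take the first one not reserved.
package main

import "fmt"

func candidateSubnets(start, step, count int) []string {
	subnets := make([]string, 0, count)
	for i := 0; i < count; i++ {
		subnets = append(subnets, fmt.Sprintf("192.168.%d.0/24", start+i*step))
	}
	return subnets
}

func main() {
	// e.g. held by another profile, as 192.168.49.0/24 is above.
	reserved := map[string]bool{"192.168.49.0/24": true}
	for _, s := range candidateSubnets(49, 9, 5) {
		if reserved[s] {
			fmt.Println("skipping reserved subnet", s)
			continue
		}
		fmt.Println("using free private subnet", s)
		break
	}
}
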
	I0921 22:06:14.975825    5492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:06:14.981825    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:15.174094    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:15.174281    5492 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:15.434347    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:15.644069    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:15.644124    5492 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:15.949508    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:16.158005    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:16.158151    5492 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:16.621804    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:16.829511    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	W0921 22:06:16.829511    5492 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	
	W0921 22:06:16.829511    5492 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:16.840462    5492 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:06:16.846490    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:17.052305    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:17.052601    5492 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:17.245984    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:14.172950    5388 cli_runner.go:211] docker exec --privileged -t NoKubernetes-20220921220434-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:06:14.172950    5388 oci.go:646] error shutdown NoKubernetes-20220921220434-5916: docker exec --privileged -t NoKubernetes-20220921220434-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:15.182018    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:06:15.394087    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:15.394087    5388 oci.go:658] temporary error verifying shutdown: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:15.394087    5388 oci.go:660] temporary error: container NoKubernetes-20220921220434-5916 status is  but expect it to be exited
	I0921 22:06:15.394087    5388 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %!v(MISSING): unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:15.809598    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:06:16.000195    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:16.000195    5388 oci.go:658] temporary error verifying shutdown: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:16.000195    5388 oci.go:660] temporary error: container NoKubernetes-20220921220434-5916 status is  but expect it to be exited
	I0921 22:06:16.000195    5388 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %!v(MISSING): unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:16.620509    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:06:16.845486    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:16.845486    5388 oci.go:658] temporary error verifying shutdown: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:16.845486    5388 oci.go:660] temporary error: container NoKubernetes-20220921220434-5916 status is  but expect it to be exited
	I0921 22:06:16.845486    5388 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %!v(MISSING): unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:18.276006    5388 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}
	W0921 22:06:18.478640    5388 cli_runner.go:211] docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:06:18.478640    5388 oci.go:658] temporary error verifying shutdown: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	I0921 22:06:18.478640    5388 oci.go:660] temporary error: container NoKubernetes-20220921220434-5916 status is  but expect it to be exited
	I0921 22:06:18.478640    5388 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %!v(MISSING): unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	W0921 22:06:17.440778    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:17.440987    5492 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:17.712901    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:17.904769    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:17.905013    5492 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:18.410441    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:18.590613    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	W0921 22:06:18.590613    5492 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	
	W0921 22:06:18.590613    5492 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:18.590613    5492 start.go:128] duration metric: createHost completed in 6.9101258s
	I0921 22:06:18.600617    5492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:06:18.606614    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:18.792462    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:18.792462    5492 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:19.144025    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:19.352538    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:19.352715    5492 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:19.671099    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:19.880915    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:19.880915    5492 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:20.341919    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:20.545143    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	W0921 22:06:20.545143    5492 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	
	W0921 22:06:20.545143    5492 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:20.555174    5492 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:06:20.561199    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:20.748637    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:20.748960    5492 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:20.942041    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:21.120157    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:21.120573    5492 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:21.644155    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:21.820395    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	I0921 22:06:21.820395    5492 retry.go:31] will retry after 673.154531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:22.502268    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916
	W0921 22:06:22.704256    5492 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916 returned with exit code 1
	W0921 22:06:22.704512    5492 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	
	W0921 22:06:22.704512    5492 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20220921220531-5916
	I0921 22:06:22.704512    5492 fix.go:57] fixHost completed within 31.3829304s
	I0921 22:06:22.704512    5492 start.go:83] releasing machines lock for "pause-20220921220531-5916", held for 31.3834698s
	W0921 22:06:22.705167    5492 out.go:239] * Failed to start docker container. Running "minikube delete -p pause-20220921220531-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for pause-20220921220531-5916 container: docker volume create pause-20220921220531-5916 --label name.minikube.sigs.k8s.io=pause-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/pause-20220921220531-5916': mkdir /var/lib/docker/volumes/pause-20220921220531-5916: read-only file system
	
	I0921 22:06:22.711107    5492 out.go:177] 
	W0921 22:06:22.715721    5492 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for pause-20220921220531-5916 container: docker volume create pause-20220921220531-5916 --label name.minikube.sigs.k8s.io=pause-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create pause-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/pause-20220921220531-5916': mkdir /var/lib/docker/volumes/pause-20220921220531-5916: read-only file system
	
	W0921 22:06:22.716064    5492 out.go:239] * Suggestion: Restart Docker
	W0921 22:06:22.716064    5492 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:06:22.719070    5492 out.go:177] 
	
	* 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "stopped-upgrade-20220921220434-5916": docker container inspect stopped-upgrade-20220921220434-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: stopped-upgrade-20220921220434-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_1059.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
version_upgrade_test.go:215: `minikube logs` after upgrade to HEAD from v1.9.0 failed: exit status 80
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (1.59s)

                                                
                                    
TestNoKubernetes/serial/Start (77.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220921220434-5916 --no-kubernetes --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220921220434-5916 --no-kubernetes --driver=docker: exit status 60 (1m16.6555429s)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220921220434-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-20220921220434-5916
	* Pulling base image ...
	* docker "NoKubernetes-20220921220434-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "NoKubernetes-20220921220434-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 22:07:15.629844    6064 network_create.go:104] error while trying to create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24: create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9c2fa4e117758db98552c3f532d8d198eedb6d70d1ea8b51aa725cc101ceca8a (br-9c2fa4e11775): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9c2fa4e117758db98552c3f532d8d198eedb6d70d1ea8b51aa725cc101ceca8a (br-9c2fa4e11775): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220921220434-5916 container: docker volume create NoKubernetes-20220921220434-5916 --label name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220921220434-5916': mkdir /var/lib/docker/volumes/NoKubernetes-20220921220434-5916: read-only file system
	
	E0921 22:07:55.106169    6064 network_create.go:104] error while trying to create docker network NoKubernetes-20220921220434-5916 192.168.58.0/24: create docker network NoKubernetes-20220921220434-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e3bf83708e426f70a578c4032ac185c1b8586cd99f3e6cc45f761985d2a19a8d (br-e3bf83708e42): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220921220434-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e3bf83708e426f70a578c4032ac185c1b8586cd99f3e6cc45f761985d2a19a8d (br-e3bf83708e42): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-20220921220434-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220921220434-5916 container: docker volume create NoKubernetes-20220921220434-5916 --label name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220921220434-5916': mkdir /var/lib/docker/volumes/NoKubernetes-20220921220434-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220921220434-5916 container: docker volume create NoKubernetes-20220921220434-5916 --label name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220921220434-5916': mkdir /var/lib/docker/volumes/NoKubernetes-20220921220434-5916: read-only file system
	
	* Suggestion: Restart Docker
	* Related issue: https://github.com/kubernetes/minikube/issues/6825

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220921220434-5916 --no-kubernetes --driver=docker" : exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220921220434-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220921220434-5916: exit status 1 (250.386ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220921220434-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220921220434-5916 -n NoKubernetes-20220921220434-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220921220434-5916 -n NoKubernetes-20220921220434-5916: exit status 7 (579.1195ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:08:05.083402    7884 status.go:247] status error: host: state: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220921220434-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Start (77.51s)

                                                
                                    
TestNoKubernetes/serial/Stop (20.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-20220921220434-5916
no_kubernetes_test.go:158: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p NoKubernetes-20220921220434-5916: exit status 82 (19.3921405s)

                                                
                                                
-- stdout --
	* Stopping node "NoKubernetes-20220921220434-5916"  ...
	* Stopping node "NoKubernetes-20220921220434-5916"  ...
	* Stopping node "NoKubernetes-20220921220434-5916"  ...
	* Stopping node "NoKubernetes-20220921220434-5916"  ...
	* Stopping node "NoKubernetes-20220921220434-5916"  ...
	* Stopping node "NoKubernetes-20220921220434-5916"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:08:14.102885    7412 daemonize_windows.go:38] error terminating scheduled stop for profile NoKubernetes-20220921220434-5916: stopping schedule-stop service for profile NoKubernetes-20220921220434-5916: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20220921220434-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect NoKubernetes-20220921220434-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:160: Failed to stop minikube "out/minikube-windows-amd64.exe stop -p NoKubernetes-20220921220434-5916" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220921220434-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220921220434-5916: exit status 1 (277.2051ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220921220434-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220921220434-5916 -n NoKubernetes-20220921220434-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220921220434-5916 -n NoKubernetes-20220921220434-5916: exit status 7 (638.7122ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:08:30.483978    7464 status.go:247] status error: host: state: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220921220434-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Stop (20.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (64.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220921220434-5916 --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220921220434-5916 --driver=docker: exit status 1 (1m4.0219886s)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220921220434-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-20220921220434-5916
	* Pulling base image ...
	* docker "NoKubernetes-20220921220434-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=16300MB) ...
	* docker "NoKubernetes-20220921220434-5916" container is missing, will recreate.

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	E0921 22:08:57.968421    6340 network_create.go:104] error while trying to create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24: create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b32064ac87a6e02b3571503ff91e2ecda8ff130e2b49c0a58e320b7ec893be9c (br-b32064ac87a6): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network NoKubernetes-20220921220434-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 NoKubernetes-20220921220434-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b32064ac87a6e02b3571503ff91e2ecda8ff130e2b49c0a58e320b7ec893be9c (br-b32064ac87a6): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for NoKubernetes-20220921220434-5916 container: docker volume create NoKubernetes-20220921220434-5916 --label name.minikube.sigs.k8s.io=NoKubernetes-20220921220434-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create NoKubernetes-20220921220434-5916: error while creating volume root path '/var/lib/docker/volumes/NoKubernetes-20220921220434-5916': mkdir /var/lib/docker/volumes/NoKubernetes-20220921220434-5916: read-only file system
	

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220921220434-5916 --driver=docker" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartNoArgs]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220921220434-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20220921220434-5916: exit status 1 (250.6376ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: NoKubernetes-20220921220434-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220921220434-5916 -n NoKubernetes-20220921220434-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220921220434-5916 -n NoKubernetes-20220921220434-5916: exit status 7 (554.948ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:09:35.319807    6520 status.go:247] status error: host: state: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220921220434-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (64.84s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (51.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220921220934-5916 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-20220921220934-5916 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: exit status 60 (50.4469778s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20220921220934-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-20220921220934-5916 in cluster old-k8s-version-20220921220934-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-20220921220934-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:09:34.532136    7968 out.go:296] Setting OutFile to fd 1640 ...
	I0921 22:09:34.600131    7968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:09:34.600131    7968 out.go:309] Setting ErrFile to fd 1556...
	I0921 22:09:34.600131    7968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:09:34.624704    7968 out.go:303] Setting JSON to false
	I0921 22:09:34.628244    7968 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4243,"bootTime":1663793931,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:09:34.628445    7968 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:09:34.634320    7968 out.go:177] * [old-k8s-version-20220921220934-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:09:34.638323    7968 notify.go:214] Checking for updates...
	I0921 22:09:34.641320    7968 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:09:34.643336    7968 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:09:34.646314    7968 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:09:34.649437    7968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:09:34.652943    7968 config.go:180] Loaded profile config "NoKubernetes-20220921220434-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0921 22:09:34.652943    7968 config.go:180] Loaded profile config "cert-expiration-20220921220719-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:09:34.653726    7968 config.go:180] Loaded profile config "kubernetes-upgrade-20220921220835-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0921 22:09:34.653726    7968 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:09:34.655067    7968 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:09:34.938241    7968 docker.go:137] docker version: linux-20.10.17
	I0921 22:09:34.945244    7968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:09:35.506746    7968 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:78 SystemTime:2022-09-21 22:09:35.1001198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:09:35.511237    7968 out.go:177] * Using the docker driver based on user configuration
	I0921 22:09:35.513276    7968 start.go:284] selected driver: docker
	I0921 22:09:35.513276    7968 start.go:808] validating driver "docker" against <nil>
	I0921 22:09:35.513276    7968 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:09:35.592395    7968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:09:36.220183    7968 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:78 SystemTime:2022-09-21 22:09:35.7620049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:09:36.220507    7968 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:09:36.221215    7968 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:09:36.224969    7968 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:09:36.227173    7968 cni.go:95] Creating CNI manager for ""
	I0921 22:09:36.227707    7968 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:09:36.227707    7968 start_flags.go:316] config:
	{Name:old-k8s-version-20220921220934-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220921220934-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:09:36.231792    7968 out.go:177] * Starting control plane node old-k8s-version-20220921220934-5916 in cluster old-k8s-version-20220921220934-5916
	I0921 22:09:36.233840    7968 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:09:36.236807    7968 out.go:177] * Pulling base image ...
	I0921 22:09:36.238758    7968 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0921 22:09:36.238758    7968 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:09:36.238758    7968 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0921 22:09:36.239341    7968 cache.go:57] Caching tarball of preloaded images
	I0921 22:09:36.239390    7968 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:09:36.239390    7968 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0921 22:09:36.240061    7968 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20220921220934-5916\config.json ...
	I0921 22:09:36.240061    7968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20220921220934-5916\config.json: {Name:mk006512fee9254136d1ca2383c54343ecbb8ccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:09:36.454397    7968 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:09:36.454397    7968 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:09:36.454397    7968 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:09:36.454397    7968 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:09:36.454397    7968 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:09:36.454397    7968 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:09:36.454397    7968 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:09:36.455396    7968 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:09:36.455396    7968 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:09:38.905216    7968 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:09:38.905331    7968 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:09:38.905371    7968 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:09:38.905754    7968 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:09:39.158195    7968 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 900msI0921 22:09:40.976992    7968 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:09:40.976992    7968 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:09:40.976992    7968 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:09:40.976992    7968 start.go:364] acquiring machines lock for old-k8s-version-20220921220934-5916: {Name:mka5121945d619472d3cfcf71df0e13caeaa183b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:09:40.976992    7968 start.go:368] acquired machines lock for "old-k8s-version-20220921220934-5916" in 0s
	I0921 22:09:40.977684    7968 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-20220921220934-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220921220934-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:09:40.977737    7968 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:09:40.981699    7968 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:09:40.984680    7968 start.go:159] libmachine.API.Create for "old-k8s-version-20220921220934-5916" (driver="docker")
	I0921 22:09:40.984680    7968 client.go:168] LocalClient.Create starting
	I0921 22:09:40.985520    7968 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:09:40.985520    7968 main.go:134] libmachine: Decoding PEM data...
	I0921 22:09:40.985520    7968 main.go:134] libmachine: Parsing certificate...
	I0921 22:09:40.986300    7968 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:09:40.986330    7968 main.go:134] libmachine: Decoding PEM data...
	I0921 22:09:40.986330    7968 main.go:134] libmachine: Parsing certificate...
	I0921 22:09:40.995456    7968 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220921220934-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:09:41.211444    7968 cli_runner.go:211] docker network inspect old-k8s-version-20220921220934-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:09:41.214465    7968 network_create.go:272] running [docker network inspect old-k8s-version-20220921220934-5916] to gather additional debugging logs...
	I0921 22:09:41.214465    7968 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220921220934-5916
	W0921 22:09:41.402851    7968 cli_runner.go:211] docker network inspect old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:09:41.402927    7968 network_create.go:275] error running [docker network inspect old-k8s-version-20220921220934-5916]: docker network inspect old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220921220934-5916
	I0921 22:09:41.402927    7968 network_create.go:277] output of [docker network inspect old-k8s-version-20220921220934-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220921220934-5916
	
	** /stderr **
	I0921 22:09:41.411808    7968 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:09:41.627026    7968 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005be568] misses:0}
	I0921 22:09:41.627026    7968 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:09:41.627026    7968 network_create.go:115] attempt to create docker network old-k8s-version-20220921220934-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:09:41.636220    7968 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916
	W0921 22:09:41.840494    7968 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916 returned with exit code 1
	E0921 22:09:41.840610    7968 network_create.go:104] error while trying to create docker network old-k8s-version-20220921220934-5916 192.168.49.0/24: create docker network old-k8s-version-20220921220934-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e69c423b8ae22e290ff42516a5fc4fdefe6d35996d86e1561acd4855e3707721 (br-e69c423b8ae2): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:09:41.840610    7968 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220921220934-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e69c423b8ae22e290ff42516a5fc4fdefe6d35996d86e1561acd4855e3707721 (br-e69c423b8ae2): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220921220934-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e69c423b8ae22e290ff42516a5fc4fdefe6d35996d86e1561acd4855e3707721 (br-e69c423b8ae2): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
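The failure above is the Docker daemon refusing to create a second bridge network on 192.168.49.0/24 because an existing bridge already claims an overlapping range. A minimal Go sketch of that kind of overlap check (illustrative only, not taken from the minikube source; the subnet list is hypothetical and would normally be gathered with "docker network inspect" on the host):

    package main

    import (
        "fmt"
        "net"
    )

    // overlaps reports whether two CIDR blocks share any addresses.
    func overlaps(a, b *net.IPNet) bool {
        return a.Contains(b.IP) || b.Contains(a.IP)
    }

    func main() {
        // Hypothetical inputs: the subnet minikube asked for, plus the subnets
        // of bridge networks that already exist on the host.
        _, want, _ := net.ParseCIDR("192.168.49.0/24")
        for _, s := range []string{"192.168.49.0/24", "172.17.0.0/16"} {
            _, existing, err := net.ParseCIDR(s)
            if err != nil {
                continue
            }
            if overlaps(want, existing) {
                fmt.Printf("%s conflicts with existing bridge %s\n", want, existing)
            }
        }
    }
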
	I0921 22:09:41.857499    7968 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:09:42.065821    7968 cli_runner.go:164] Run: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:09:42.260387    7968 cli_runner.go:211] docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:09:42.260680    7968 client.go:171] LocalClient.Create took 1.2759908s
	I0921 22:09:44.287048    7968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:09:44.294635    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:09:44.520155    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:09:44.520208    7968 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:44.808269    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:09:45.016526    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:09:45.016526    7968 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:45.573663    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:09:45.780152    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	W0921 22:09:45.780520    7968 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:09:45.780624    7968 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:45.792240    7968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:09:45.799029    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:09:46.032585    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:09:46.032585    7968 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:46.288212    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:09:46.494673    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:09:46.494673    7968 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:46.863213    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:09:47.059036    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:09:47.059036    7968 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:47.738277    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:09:47.951187    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	W0921 22:09:47.951187    7968 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:09:47.951187    7968 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:47.951187    7968 start.go:128] duration metric: createHost completed in 6.9733971s
	I0921 22:09:47.951187    7968 start.go:83] releasing machines lock for "old-k8s-version-20220921220934-5916", held for 6.9736123s
	W0921 22:09:47.951187    7968 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220921220934-5916 container: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220921220934-5916: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220921220934-5916': mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system
	I0921 22:09:47.972186    7968 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:09:48.199482    7968 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:48.199482    7968 delete.go:82] Unable to get host status for old-k8s-version-20220921220934-5916, assuming it has already been deleted: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	W0921 22:09:48.199482    7968 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220921220934-5916 container: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220921220934-5916: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220921220934-5916': mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220921220934-5916 container: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220921220934-5916: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220921220934-5916': mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system
	
	I0921 22:09:48.199482    7968 start.go:617] Will try again in 5 seconds ...
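The root cause of the StartHost failure above is "docker volume create" exiting with status 1 because /var/lib/docker/volumes sits on a read-only file system, so every later "docker container inspect" can only report "No such container". A short Go sketch of running such a CLI call and capturing the stdout/stderr pair the way the log prints it (illustrative only, not minikube's cli_runner.go; the volume name is hypothetical):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // run executes a command and returns its stdout, stderr and error
    // separately, mirroring the "stdout:/stderr:" pairs printed in the log.
    func run(name string, args ...string) (string, string, error) {
        cmd := exec.Command(name, args...)
        var out, errb bytes.Buffer
        cmd.Stdout = &out
        cmd.Stderr = &errb
        err := cmd.Run()
        return out.String(), errb.String(), err
    }

    func main() {
        // Hypothetical volume name; on this host the call fails with
        // "read-only file system" on stderr, as captured above.
        stdout, stderr, err := run("docker", "volume", "create", "example-volume")
        fmt.Printf("stdout:\n%s\nstderr:\n%s\nerr: %v\n", stdout, stderr, err)
    }
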
	I0921 22:09:53.211498    7968 start.go:364] acquiring machines lock for old-k8s-version-20220921220934-5916: {Name:mka5121945d619472d3cfcf71df0e13caeaa183b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:09:53.211498    7968 start.go:368] acquired machines lock for "old-k8s-version-20220921220934-5916" in 0s
	I0921 22:09:53.211498    7968 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:09:53.211498    7968 fix.go:55] fixHost starting: 
	I0921 22:09:53.226502    7968 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:09:53.433377    7968 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:53.433627    7968 fix.go:103] recreateIfNeeded on old-k8s-version-20220921220934-5916: state= err=unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:53.433668    7968 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:09:53.437731    7968 out.go:177] * docker "old-k8s-version-20220921220934-5916" container is missing, will recreate.
	I0921 22:09:53.440371    7968 delete.go:124] DEMOLISHING old-k8s-version-20220921220934-5916 ...
	I0921 22:09:53.455544    7968 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:09:53.649228    7968 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:09:53.649337    7968 stop.go:75] unable to get state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:53.649337    7968 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:53.666665    7968 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:09:53.852504    7968 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:53.852504    7968 delete.go:82] Unable to get host status for old-k8s-version-20220921220934-5916, assuming it has already been deleted: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:53.860503    7968 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220921220934-5916
	W0921 22:09:54.039951    7968 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:09:54.040044    7968 kic.go:356] could not find the container old-k8s-version-20220921220934-5916 to remove it. will try anyways
	I0921 22:09:54.049637    7968 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:09:54.244051    7968 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:09:54.244051    7968 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:54.260053    7968 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-20220921220934-5916 /bin/bash -c "sudo init 0"
	W0921 22:09:54.463155    7968 cli_runner.go:211] docker exec --privileged -t old-k8s-version-20220921220934-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:09:54.463155    7968 oci.go:646] error shutdown old-k8s-version-20220921220934-5916: docker exec --privileged -t old-k8s-version-20220921220934-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:55.476628    7968 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:09:55.655309    7968 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:55.655309    7968 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:55.655309    7968 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:09:55.655309    7968 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:55.994860    7968 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:09:56.201925    7968 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:56.201925    7968 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:56.201925    7968 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:09:56.201925    7968 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:56.662163    7968 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:09:56.862595    7968 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:56.862595    7968 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:56.862595    7968 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:09:56.862595    7968 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:57.778867    7968 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:09:58.003977    7968 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:58.004104    7968 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:58.004104    7968 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:09:58.004104    7968 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:59.738124    7968 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:09:59.929210    7968 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:59.929579    7968 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:09:59.929579    7968 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:09:59.929579    7968 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:03.268457    7968 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:10:03.488730    7968 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:03.488823    7968 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:03.488947    7968 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:10:03.488981    7968 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:06.222936    7968 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:10:06.444502    7968 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:06.444502    7968 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:06.444502    7968 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:10:06.444502    7968 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:11.480862    7968 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:10:11.689623    7968 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:11.689623    7968 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:11.689623    7968 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:10:11.689623    7968 oci.go:88] couldn't shut down old-k8s-version-20220921220934-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	 
	I0921 22:10:11.697547    7968 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-20220921220934-5916
	I0921 22:10:11.913574    7968 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220921220934-5916
	W0921 22:10:12.108997    7968 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:12.115008    7968 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220921220934-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:10:12.343182    7968 cli_runner.go:211] docker network inspect old-k8s-version-20220921220934-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:10:12.350829    7968 network_create.go:272] running [docker network inspect old-k8s-version-20220921220934-5916] to gather additional debugging logs...
	I0921 22:10:12.350829    7968 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220921220934-5916
	W0921 22:10:12.560487    7968 cli_runner.go:211] docker network inspect old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:12.560629    7968 network_create.go:275] error running [docker network inspect old-k8s-version-20220921220934-5916]: docker network inspect old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220921220934-5916
	I0921 22:10:12.560629    7968 network_create.go:277] output of [docker network inspect old-k8s-version-20220921220934-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220921220934-5916
	
	** /stderr **
	W0921 22:10:12.561805    7968 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:10:12.561805    7968 fix.go:115] Sleeping 1 second for extra luck!
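The long runs of "docker container inspect ... exit status 1" above come from a retry loop that sleeps a growing delay between attempts before giving up ("will retry after ..."). A generic Go sketch of that pattern (illustrative only, not minikube's retry.go; the attempt count, delays, and error are made up):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retry runs fn until it succeeds or attempts are exhausted, sleeping an
    // increasing delay between failures and returning the last error.
    func retry(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay = delay * 3 / 2 // back off a little more each time
        }
        return err
    }

    func main() {
        err := retry(5, 250*time.Millisecond, func() error {
            // Stand-in for "docker container inspect" on a container that
            // does not exist yet.
            return errors.New("No such container: example")
        })
        fmt.Println("giving up:", err)
    }
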
	I0921 22:10:13.576900    7968 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:10:13.581114    7968 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:10:13.581478    7968 start.go:159] libmachine.API.Create for "old-k8s-version-20220921220934-5916" (driver="docker")
	I0921 22:10:13.581478    7968 client.go:168] LocalClient.Create starting
	I0921 22:10:13.581761    7968 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:10:13.582388    7968 main.go:134] libmachine: Decoding PEM data...
	I0921 22:10:13.582388    7968 main.go:134] libmachine: Parsing certificate...
	I0921 22:10:13.582649    7968 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:10:13.582837    7968 main.go:134] libmachine: Decoding PEM data...
	I0921 22:10:13.582837    7968 main.go:134] libmachine: Parsing certificate...
	I0921 22:10:13.590558    7968 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220921220934-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:10:13.794643    7968 cli_runner.go:211] docker network inspect old-k8s-version-20220921220934-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:10:13.801767    7968 network_create.go:272] running [docker network inspect old-k8s-version-20220921220934-5916] to gather additional debugging logs...
	I0921 22:10:13.801767    7968 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220921220934-5916
	W0921 22:10:13.996000    7968 cli_runner.go:211] docker network inspect old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:13.996000    7968 network_create.go:275] error running [docker network inspect old-k8s-version-20220921220934-5916]: docker network inspect old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220921220934-5916
	I0921 22:10:13.996000    7968 network_create.go:277] output of [docker network inspect old-k8s-version-20220921220934-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220921220934-5916
	
	** /stderr **
	I0921 22:10:14.004303    7968 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:10:14.213146    7968 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005be568] amended:false}} dirty:map[] misses:0}
	I0921 22:10:14.213743    7968 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:10:14.228494    7968 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005be568] amended:true}} dirty:map[192.168.49.0:0xc0005be568 192.168.58.0:0xc0005be6b0] misses:0}
	I0921 22:10:14.228494    7968 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:10:14.228494    7968 network_create.go:115] attempt to create docker network old-k8s-version-20220921220934-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:10:14.235279    7968 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916
	W0921 22:10:14.444806    7968 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916 returned with exit code 1
	E0921 22:10:14.444806    7968 network_create.go:104] error while trying to create docker network old-k8s-version-20220921220934-5916 192.168.58.0/24: create docker network old-k8s-version-20220921220934-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2514ae223821c312db58248949088f85b451e90c7ff7742ddb0507cbc1b1a75a (br-2514ae223821): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:10:14.444806    7968 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220921220934-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2514ae223821c312db58248949088f85b451e90c7ff7742ddb0507cbc1b1a75a (br-2514ae223821): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220921220934-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2514ae223821c312db58248949088f85b451e90c7ff7742ddb0507cbc1b1a75a (br-2514ae223821): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:10:14.458574    7968 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:10:14.673885    7968 cli_runner.go:164] Run: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:10:14.878345    7968 cli_runner.go:211] docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:10:14.878683    7968 client.go:171] LocalClient.Create took 1.2971957s
	I0921 22:10:16.907064    7968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:10:16.915711    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:17.103898    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:17.103898    7968 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:17.365163    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:17.570618    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:17.570782    7968 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:17.874905    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:18.078721    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:18.078721    7968 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:18.545258    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:18.726411    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	W0921 22:10:18.726689    7968 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:10:18.726689    7968 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:18.737297    7968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:10:18.743145    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:18.929616    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:18.929616    7968 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:19.128289    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:19.321054    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:19.321369    7968 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:19.602800    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:19.804428    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:19.804491    7968 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:20.311928    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:20.504498    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	W0921 22:10:20.504498    7968 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:10:20.504498    7968 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:20.504498    7968 start.go:128] duration metric: createHost completed in 6.9275131s
	I0921 22:10:20.519368    7968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:10:20.527683    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:20.706731    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:20.707175    7968 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:21.061470    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:21.268796    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:21.269063    7968 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:21.590192    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:21.796638    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:21.796696    7968 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:22.257913    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:22.465901    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	W0921 22:10:22.465901    7968 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:10:22.465901    7968 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:22.476707    7968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:10:22.482875    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:22.684300    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:22.684300    7968 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:22.879469    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:23.057540    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:23.057540    7968 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:23.578691    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:23.770844    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:23.770844    7968 retry.go:31] will retry after 673.154531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:24.460971    7968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:24.671246    7968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	W0921 22:10:24.671246    7968 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:10:24.671246    7968 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:24.671246    7968 fix.go:57] fixHost completed within 31.4595073s
	I0921 22:10:24.671246    7968 start.go:83] releasing machines lock for "old-k8s-version-20220921220934-5916", held for 31.4595073s
	W0921 22:10:24.671246    7968 out.go:239] * Failed to start docker container. Running "minikube delete -p old-k8s-version-20220921220934-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220921220934-5916 container: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220921220934-5916: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220921220934-5916': mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-20220921220934-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220921220934-5916 container: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220921220934-5916: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220921220934-5916': mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system
	
	I0921 22:10:24.676323    7968 out.go:177] 
	W0921 22:10:24.678394    7968 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220921220934-5916 container: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220921220934-5916: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220921220934-5916': mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220921220934-5916 container: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220921220934-5916: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220921220934-5916': mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system
	
	W0921 22:10:24.678394    7968 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:10:24.679020    7968 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:10:24.682373    7968 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p old-k8s-version-20220921220934-5916 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220921220934-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220921220934-5916: exit status 1 (250.7154ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220921220934-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916: exit status 7 (593.0244ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:10:25.635479    8144 status.go:247] status error: host: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220921220934-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (51.40s)
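Note on this failure: it is a host-provisioning error, not a Kubernetes error. Every attempt to create the node volume was rejected by the Docker daemon with "mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system", so minikube exited with PR_DOCKER_READONLY_VOL (exit status 60) and itself suggested restarting Docker (related issue: https://github.com/kubernetes/minikube/issues/6825). As a rough manual check, assuming Docker Desktop is still in this state, the same volume command the test issued can be rerun directly; while the condition persists it should fail with the same daemon error, and if it succeeds after a Docker restart the volume should be removed before re-running the test:

	docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true
	docker volume rm old-k8s-version-20220921220934-5916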

TestStartStop/group/no-preload/serial/FirstStart (50.7s)
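The run below hits the same read-only volume root ("mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system") and additionally cannot create its dedicated bridge network, because 192.168.49.0/24 is already held by an existing network (the "networks have overlapping IPv4" error against br-a04d36bfb3cf), so it continues with the "Unable to create dedicated network" warning. A quick check of the conflicting subnet, assuming the network named in that error still exists, reuses the subnet fields from the template minikube itself inspects in the log below:

	docker network ls --filter driver=bridge
	docker network inspect a04d36bfb3cf --format "{{range .IPAM.Config}}{{.Subnet}}{{end}}"

Once the affected profiles are deleted, docker network prune would clear any leftover minikube networks holding that range.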

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220921220937-5916 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.2

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-20220921220937-5916 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.2: exit status 60 (49.7338714s)

-- stdout --
	* [no-preload-20220921220937-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node no-preload-20220921220937-5916 in cluster no-preload-20220921220937-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-20220921220937-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 22:09:37.267086    1480 out.go:296] Setting OutFile to fd 928 ...
	I0921 22:09:37.332128    1480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:09:37.332128    1480 out.go:309] Setting ErrFile to fd 1908...
	I0921 22:09:37.332128    1480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:09:37.341906    1480 out.go:303] Setting JSON to false
	I0921 22:09:37.365813    1480 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4245,"bootTime":1663793932,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:09:37.365941    1480 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:09:37.372424    1480 out.go:177] * [no-preload-20220921220937-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:09:37.376456    1480 notify.go:214] Checking for updates...
	I0921 22:09:37.378935    1480 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:09:37.381749    1480 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:09:37.384410    1480 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:09:37.388988    1480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:09:37.393699    1480 config.go:180] Loaded profile config "cert-expiration-20220921220719-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:09:37.393699    1480 config.go:180] Loaded profile config "kubernetes-upgrade-20220921220835-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0921 22:09:37.394654    1480 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:09:37.394654    1480 config.go:180] Loaded profile config "old-k8s-version-20220921220934-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0921 22:09:37.394654    1480 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:09:37.713234    1480 docker.go:137] docker version: linux-20.10.17
	I0921 22:09:37.721729    1480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:09:38.289939    1480 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:79 SystemTime:2022-09-21 22:09:37.8944794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:09:38.294070    1480 out.go:177] * Using the docker driver based on user configuration
	I0921 22:09:38.297612    1480 start.go:284] selected driver: docker
	I0921 22:09:38.297612    1480 start.go:808] validating driver "docker" against <nil>
	I0921 22:09:38.297612    1480 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:09:38.359348    1480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:09:38.907575    1480 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:79 SystemTime:2022-09-21 22:09:38.5222769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:09:38.908632    1480 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:09:38.909779    1480 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:09:38.912811    1480 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:09:38.915284    1480 cni.go:95] Creating CNI manager for ""
	I0921 22:09:38.915322    1480 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:09:38.915322    1480 start_flags.go:316] config:
	{Name:no-preload-20220921220937-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220937-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:09:38.918873    1480 out.go:177] * Starting control plane node no-preload-20220921220937-5916 in cluster no-preload-20220921220937-5916
	I0921 22:09:38.920304    1480 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:09:38.923115    1480 out.go:177] * Pulling base image ...
	I0921 22:09:38.925255    1480 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:09:38.925255    1480 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:09:38.925255    1480 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20220921220937-5916\config.json ...
	I0921 22:09:38.925255    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0921 22:09:38.925255    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.25.2
	I0921 22:09:38.925255    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.25.2
	I0921 22:09:38.925255    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.25.2
	I0921 22:09:38.925255    1480 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20220921220937-5916\config.json: {Name:mke66fa57f8eff8ffec72e5c6781f913673f281d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:09:38.926502    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.5.4-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.4-0
	I0921 22:09:38.926502    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.9.3 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.9.3
	I0921 22:09:38.925255    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.25.2
	I0921 22:09:38.926619    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.8 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.8
	I0921 22:09:39.096763    1480 cache.go:107] acquiring lock: {Name:mk93ccdec90972c05247bea23df9b97c54ef0291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:09:39.097763    1480 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0921 22:09:39.097763    1480 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 172.506ms
	I0921 22:09:39.097763    1480 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0921 22:09:39.106808    1480 cache.go:107] acquiring lock: {Name:mk42e25c67b04a7be621dff66042769c9efcef51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:09:39.106808    1480 cache.go:107] acquiring lock: {Name:mk8be8007302f2b8b3da1dd98caf592762225a91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:09:39.106808    1480 cache.go:107] acquiring lock: {Name:mk23bc57c381d093082940a5c180cc32b71f6590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:09:39.106808    1480 cache.go:107] acquiring lock: {Name:mk0ca2aa3958827f29fbc172907397ae8c50da6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:09:39.106808    1480 cache.go:107] acquiring lock: {Name:mkab3ed6e795d07d8ef34d153242f0555bd2990e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:09:39.106808    1480 cache.go:107] acquiring lock: {Name:mk0addad2b04152bfd63161db235c11568b39fe8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:09:39.106808    1480 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.25.2 exists
	I0921 22:09:39.106808    1480 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.9.3 exists
	I0921 22:09:39.106808    1480 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.25.2 exists
	I0921 22:09:39.107781    1480 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.4-0 exists
	I0921 22:09:39.107781    1480 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.25.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.25.2" took 182.5241ms
	I0921 22:09:39.107781    1480 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.25.2 succeeded
	I0921 22:09:39.107781    1480 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.8 exists
	I0921 22:09:39.107781    1480 cache.go:96] cache image "registry.k8s.io/etcd:3.5.4-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.5.4-0" took 181.2771ms
	I0921 22:09:39.107781    1480 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.4-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.4-0 succeeded
	I0921 22:09:39.107781    1480 cache.go:96] cache image "registry.k8s.io/pause:3.8" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.8" took 180.9063ms
	I0921 22:09:39.107781    1480 cache.go:80] save to tar file registry.k8s.io/pause:3.8 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.8 succeeded
	I0921 22:09:39.107781    1480 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.25.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.25.2" took 182.5241ms
	I0921 22:09:39.107781    1480 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.25.2 succeeded
	I0921 22:09:39.107781    1480 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.9.3" took 181.0419ms
	I0921 22:09:39.107781    1480 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.9.3 succeeded
	I0921 22:09:39.107781    1480 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.25.2 exists
	I0921 22:09:39.108767    1480 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.25.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.25.2" took 183.5103ms
	I0921 22:09:39.108767    1480 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.25.2 succeeded
	I0921 22:09:39.126769    1480 cache.go:107] acquiring lock: {Name:mkb326da2140b0ae2e00a2988d7409604e21ee2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:09:39.126769    1480 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.25.2 exists
	I0921 22:09:39.126769    1480 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.25.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.25.2" took 199.9884ms
	I0921 22:09:39.126769    1480 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.25.2 succeeded
	I0921 22:09:39.126769    1480 cache.go:87] Successfully saved all images to host disk.
	I0921 22:09:39.174273    1480 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:09:39.174273    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:09:39.174273    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:09:39.174273    1480 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:09:39.174273    1480 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:09:39.174273    1480 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:09:39.174273    1480 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:09:39.174273    1480 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:09:39.174273    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:09:41.424618    1480 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:09:41.424618    1480 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:09:41.424618    1480 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:09:41.425581    1480 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:09:41.623006    1480 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 900ms
	I0921 22:09:43.800779    1480 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:09:43.800840    1480 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:09:43.800840    1480 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:09:43.800840    1480 start.go:364] acquiring machines lock for no-preload-20220921220937-5916: {Name:mk5ebebabfef01f6dc67af3c2b2ec3d91e957a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:09:43.800840    1480 start.go:368] acquired machines lock for "no-preload-20220921220937-5916" in 0s
	I0921 22:09:43.800840    1480 start.go:93] Provisioning new machine with config: &{Name:no-preload-20220921220937-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220937-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:09:43.801670    1480 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:09:43.810215    1480 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:09:43.811116    1480 start.go:159] libmachine.API.Create for "no-preload-20220921220937-5916" (driver="docker")
	I0921 22:09:43.811116    1480 client.go:168] LocalClient.Create starting
	I0921 22:09:43.811311    1480 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:09:43.811311    1480 main.go:134] libmachine: Decoding PEM data...
	I0921 22:09:43.811881    1480 main.go:134] libmachine: Parsing certificate...
	I0921 22:09:43.812019    1480 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:09:43.812019    1480 main.go:134] libmachine: Decoding PEM data...
	I0921 22:09:43.812019    1480 main.go:134] libmachine: Parsing certificate...
	I0921 22:09:43.820990    1480 cli_runner.go:164] Run: docker network inspect no-preload-20220921220937-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:09:44.018619    1480 cli_runner.go:211] docker network inspect no-preload-20220921220937-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:09:44.027200    1480 network_create.go:272] running [docker network inspect no-preload-20220921220937-5916] to gather additional debugging logs...
	I0921 22:09:44.027326    1480 cli_runner.go:164] Run: docker network inspect no-preload-20220921220937-5916
	W0921 22:09:44.241591    1480 cli_runner.go:211] docker network inspect no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:09:44.241799    1480 network_create.go:275] error running [docker network inspect no-preload-20220921220937-5916]: docker network inspect no-preload-20220921220937-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220921220937-5916
	I0921 22:09:44.241799    1480 network_create.go:277] output of [docker network inspect no-preload-20220921220937-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220921220937-5916
	
	** /stderr **
	I0921 22:09:44.242338    1480 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:09:44.492781    1480 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014acc8] misses:0}
	I0921 22:09:44.493796    1480 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:09:44.493796    1480 network_create.go:115] attempt to create docker network no-preload-20220921220937-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:09:44.501752    1480 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916
	W0921 22:09:44.721006    1480 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916 returned with exit code 1
	E0921 22:09:44.721037    1480 network_create.go:104] error while trying to create docker network no-preload-20220921220937-5916 192.168.49.0/24: create docker network no-preload-20220921220937-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e05cef0fef095a11453c7d036bd8fc4892992e051690fde9174e8afb7955aa63 (br-e05cef0fef09): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:09:44.721037    1480 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220921220937-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e05cef0fef095a11453c7d036bd8fc4892992e051690fde9174e8afb7955aa63 (br-e05cef0fef09): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220921220937-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network e05cef0fef095a11453c7d036bd8fc4892992e051690fde9174e8afb7955aa63 (br-e05cef0fef09): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:09:44.735860    1480 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:09:45.007520    1480 cli_runner.go:164] Run: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:09:45.234089    1480 cli_runner.go:211] docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:09:45.234477    1480 client.go:171] LocalClient.Create took 1.4233502s
	I0921 22:09:47.256408    1480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:09:47.263603    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:09:47.462858    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:09:47.462858    1480 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:47.752373    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:09:47.967187    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:09:47.967187    1480 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:48.517459    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:09:48.726021    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	W0921 22:09:48.726021    1480 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:09:48.726021    1480 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:48.736022    1480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:09:48.742022    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:09:48.961435    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:09:48.961435    1480 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:49.217835    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:09:49.427421    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:09:49.427421    1480 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:49.797951    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:09:50.003422    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:09:50.003422    1480 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:50.680751    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:09:50.888029    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	W0921 22:09:50.888029    1480 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:09:50.888029    1480 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:50.888029    1480 start.go:128] duration metric: createHost completed in 7.086305s
	I0921 22:09:50.888029    1480 start.go:83] releasing machines lock for "no-preload-20220921220937-5916", held for 7.0871352s
	W0921 22:09:50.888029    1480 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for no-preload-20220921220937-5916 container: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220921220937-5916: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220921220937-5916': mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system
	I0921 22:09:50.904392    1480 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:09:51.105105    1480 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:51.105250    1480 delete.go:82] Unable to get host status for no-preload-20220921220937-5916, assuming it has already been deleted: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	W0921 22:09:51.105250    1480 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for no-preload-20220921220937-5916 container: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220921220937-5916: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220921220937-5916': mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for no-preload-20220921220937-5916 container: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220921220937-5916: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220921220937-5916': mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system
	
	I0921 22:09:51.105250    1480 start.go:617] Will try again in 5 seconds ...
	I0921 22:09:56.109886    1480 start.go:364] acquiring machines lock for no-preload-20220921220937-5916: {Name:mk5ebebabfef01f6dc67af3c2b2ec3d91e957a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:09:56.109886    1480 start.go:368] acquired machines lock for "no-preload-20220921220937-5916" in 0s
	I0921 22:09:56.109886    1480 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:09:56.109886    1480 fix.go:55] fixHost starting: 
	I0921 22:09:56.125099    1480 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:09:56.358232    1480 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:56.358611    1480 fix.go:103] recreateIfNeeded on no-preload-20220921220937-5916: state= err=unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:56.358693    1480 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:09:56.367908    1480 out.go:177] * docker "no-preload-20220921220937-5916" container is missing, will recreate.
	I0921 22:09:56.370059    1480 delete.go:124] DEMOLISHING no-preload-20220921220937-5916 ...
	I0921 22:09:56.383068    1480 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:09:56.559197    1480 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:09:56.559197    1480 stop.go:75] unable to get state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:56.559197    1480 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:56.572869    1480 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:09:56.783487    1480 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:56.783487    1480 delete.go:82] Unable to get host status for no-preload-20220921220937-5916, assuming it has already been deleted: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:56.791645    1480 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220921220937-5916
	W0921 22:09:56.977502    1480 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:09:56.977502    1480 kic.go:356] could not find the container no-preload-20220921220937-5916 to remove it. will try anyways
	I0921 22:09:56.985163    1480 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:09:57.192064    1480 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:09:57.192064    1480 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:57.201789    1480 cli_runner.go:164] Run: docker exec --privileged -t no-preload-20220921220937-5916 /bin/bash -c "sudo init 0"
	W0921 22:09:57.409869    1480 cli_runner.go:211] docker exec --privileged -t no-preload-20220921220937-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:09:57.409920    1480 oci.go:646] error shutdown no-preload-20220921220937-5916: docker exec --privileged -t no-preload-20220921220937-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:58.433397    1480 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:09:58.612800    1480 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:58.613010    1480 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:58.613036    1480 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:09:58.613093    1480 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:58.948838    1480 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:09:59.160765    1480 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:59.160765    1480 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:59.160765    1480 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:09:59.160765    1480 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:59.626156    1480 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:09:59.836065    1480 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:09:59.836065    1480 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:09:59.836065    1480 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:09:59.836065    1480 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:00.760072    1480 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:10:00.954454    1480 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:00.954486    1480 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:00.954486    1480 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:10:00.954486    1480 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:02.691317    1480 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:10:02.883774    1480 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:02.883774    1480 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:02.883774    1480 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:10:02.883774    1480 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:06.222936    1480 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:10:06.444502    1480 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:06.444502    1480 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:06.444502    1480 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:10:06.444502    1480 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:09.178133    1480 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:10:09.372861    1480 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:09.372861    1480 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:09.372861    1480 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:10:09.372861    1480 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:14.408450    1480 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:10:14.614905    1480 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:14.614905    1480 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:14.614905    1480 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:10:14.614905    1480 oci.go:88] couldn't shut down no-preload-20220921220937-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	 
	I0921 22:10:14.621883    1480 cli_runner.go:164] Run: docker rm -f -v no-preload-20220921220937-5916
	I0921 22:10:14.826498    1480 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220921220937-5916
	W0921 22:10:15.016292    1480 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:15.024190    1480 cli_runner.go:164] Run: docker network inspect no-preload-20220921220937-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:10:15.236519    1480 cli_runner.go:211] docker network inspect no-preload-20220921220937-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:10:15.244172    1480 network_create.go:272] running [docker network inspect no-preload-20220921220937-5916] to gather additional debugging logs...
	I0921 22:10:15.244172    1480 cli_runner.go:164] Run: docker network inspect no-preload-20220921220937-5916
	W0921 22:10:15.453495    1480 cli_runner.go:211] docker network inspect no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:15.453495    1480 network_create.go:275] error running [docker network inspect no-preload-20220921220937-5916]: docker network inspect no-preload-20220921220937-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220921220937-5916
	I0921 22:10:15.453495    1480 network_create.go:277] output of [docker network inspect no-preload-20220921220937-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220921220937-5916
	
	** /stderr **
	W0921 22:10:15.454984    1480 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:10:15.455064    1480 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:10:16.455438    1480 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:10:16.461929    1480 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:10:16.461929    1480 start.go:159] libmachine.API.Create for "no-preload-20220921220937-5916" (driver="docker")
	I0921 22:10:16.461929    1480 client.go:168] LocalClient.Create starting
	I0921 22:10:16.462553    1480 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:10:16.463089    1480 main.go:134] libmachine: Decoding PEM data...
	I0921 22:10:16.463089    1480 main.go:134] libmachine: Parsing certificate...
	I0921 22:10:16.463319    1480 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:10:16.463319    1480 main.go:134] libmachine: Decoding PEM data...
	I0921 22:10:16.463319    1480 main.go:134] libmachine: Parsing certificate...
	I0921 22:10:16.471903    1480 cli_runner.go:164] Run: docker network inspect no-preload-20220921220937-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:10:16.659255    1480 cli_runner.go:211] docker network inspect no-preload-20220921220937-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:10:16.666813    1480 network_create.go:272] running [docker network inspect no-preload-20220921220937-5916] to gather additional debugging logs...
	I0921 22:10:16.666813    1480 cli_runner.go:164] Run: docker network inspect no-preload-20220921220937-5916
	W0921 22:10:16.846704    1480 cli_runner.go:211] docker network inspect no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:16.846740    1480 network_create.go:275] error running [docker network inspect no-preload-20220921220937-5916]: docker network inspect no-preload-20220921220937-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220921220937-5916
	I0921 22:10:16.846829    1480 network_create.go:277] output of [docker network inspect no-preload-20220921220937-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220921220937-5916
	
	** /stderr **
	I0921 22:10:16.855024    1480 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:10:17.056844    1480 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014acc8] amended:false}} dirty:map[] misses:0}
	I0921 22:10:17.056844    1480 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:10:17.072877    1480 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014acc8] amended:true}} dirty:map[192.168.49.0:0xc00014acc8 192.168.58.0:0xc0004582c0] misses:0}
	I0921 22:10:17.073853    1480 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:10:17.073853    1480 network_create.go:115] attempt to create docker network no-preload-20220921220937-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:10:17.082308    1480 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916
	W0921 22:10:17.277127    1480 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916 returned with exit code 1
	E0921 22:10:17.277237    1480 network_create.go:104] error while trying to create docker network no-preload-20220921220937-5916 192.168.58.0/24: create docker network no-preload-20220921220937-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ef9eccd953357a575f13e4dd6ebabe13cedfc129485450062983c8b455b498c1 (br-ef9eccd95335): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:10:17.277636    1480 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220921220937-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ef9eccd953357a575f13e4dd6ebabe13cedfc129485450062983c8b455b498c1 (br-ef9eccd95335): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220921220937-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ef9eccd953357a575f13e4dd6ebabe13cedfc129485450062983c8b455b498c1 (br-ef9eccd95335): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:10:17.291178    1480 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:10:17.516029    1480 cli_runner.go:164] Run: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:10:17.710208    1480 cli_runner.go:211] docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:10:17.710208    1480 client.go:171] LocalClient.Create took 1.2482696s
	I0921 22:10:19.724445    1480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:10:19.732203    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:19.942039    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:19.942039    1480 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:20.194786    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:20.411342    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:20.411690    1480 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:20.714274    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:20.893137    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:20.893236    1480 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:21.355869    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:21.548374    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	W0921 22:10:21.548656    1480 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:10:21.548656    1480 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:21.560006    1480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:10:21.565664    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:21.765216    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:21.765216    1480 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:21.960970    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:22.169878    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:22.169878    1480 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:22.444524    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:22.638195    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:22.638449    1480 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:23.144721    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:23.336824    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	W0921 22:10:23.356964    1480 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:10:23.356964    1480 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:23.356964    1480 start.go:128] duration metric: createHost completed in 6.9012638s
	I0921 22:10:23.367039    1480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:10:23.373117    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:23.554216    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:23.554566    1480 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:23.919415    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:24.113019    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:24.113126    1480 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:24.430087    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:24.623605    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:24.623605    1480 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:25.083787    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:25.280017    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	W0921 22:10:25.280301    1480 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:10:25.280390    1480 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:25.292311    1480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:10:25.299317    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:25.496317    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:25.496857    1480 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:25.692499    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:25.914130    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:25.914130    1480 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:26.440609    1480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:26.674060    1480 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	W0921 22:10:26.674060    1480 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:10:26.674060    1480 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:26.674060    1480 fix.go:57] fixHost completed within 30.5639401s
	I0921 22:10:26.674060    1480 start.go:83] releasing machines lock for "no-preload-20220921220937-5916", held for 30.5639401s
	W0921 22:10:26.674060    1480 out.go:239] * Failed to start docker container. Running "minikube delete -p no-preload-20220921220937-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220921220937-5916 container: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220921220937-5916: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220921220937-5916': mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p no-preload-20220921220937-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220921220937-5916 container: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220921220937-5916: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220921220937-5916': mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system
	
	I0921 22:10:26.682058    1480 out.go:177] 
	W0921 22:10:26.684067    1480 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220921220937-5916 container: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220921220937-5916: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220921220937-5916': mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220921220937-5916 container: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220921220937-5916: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220921220937-5916': mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system
	
	W0921 22:10:26.684067    1480 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:10:26.684067    1480 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:10:26.687063    1480 out.go:177] 

** /stderr **
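
A note on the format string that recurs throughout the stderr above: "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" is a Go text/template expression that docker evaluates against the container's inspect data. It indexes the NetworkSettings.Ports map by "22/tcp", takes the first binding, and prints its HostPort. The sketch below evaluates the same inner template against a hand-written stand-in struct; the struct shape, field values, and port number are illustrative assumptions, not data from this run.

    package main

    import (
        "os"
        "text/template"
    )

    // Minimal stand-ins for the shape docker exposes under NetworkSettings.Ports.
    type portBinding struct {
        HostIP   string
        HostPort string
    }

    type networkSettings struct {
        Ports map[string][]portBinding
    }

    type container struct {
        NetworkSettings networkSettings
    }

    func main() {
        // The inner template string passed to `docker container inspect -f` above.
        const f = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`

        c := container{NetworkSettings: networkSettings{
            Ports: map[string][]portBinding{
                "22/tcp": {{HostIP: "127.0.0.1", HostPort: "58217"}},
            },
        }}

        t := template.Must(template.New("port").Parse(f))
        if err := t.Execute(os.Stdout, c); err != nil { // prints 58217
            panic(err)
        }
    }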

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p no-preload-20220921220937-5916 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.2": exit status 60

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220937-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220921220937-5916: exit status 1 (266.8378ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220921220937-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916: exit status 7 (606.5517ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:10:27.675356    1716 status.go:247] status error: host: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220921220937-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (50.70s)
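
The cli_runner.go "Run:" / "returned with exit code 1" pairs and the retry.go "will retry after ..." lines in the failure above follow a single pattern: shell out to the docker CLI, inspect the exit status, and retry with a growing delay. The sketch below is a generic illustration of that pattern, not minikube's actual cli_runner/retry code; the volume name, attempt count, and delays are invented for the example.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runDocker shells out to the docker CLI and returns its combined
    // stdout/stderr plus the error (an *exec.ExitError on a non-zero exit).
    func runDocker(args ...string) (string, error) {
        out, err := exec.Command("docker", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        const vol = "example-volume" // hypothetical name, unlike the profile-named volume in the log

        // Bounded retry with a growing delay, approximating the behaviour the
        // "will retry after ..." log lines describe (these delays are arbitrary).
        delay := 500 * time.Millisecond
        for attempt := 1; attempt <= 3; attempt++ {
            out, err := runDocker("volume", "create", vol)
            if err == nil {
                fmt.Printf("created volume: %s", out)
                return
            }
            if exitErr, ok := err.(*exec.ExitError); ok {
                fmt.Printf("attempt %d failed: exit status %d\n%s", attempt, exitErr.ExitCode(), out)
            } else {
                fmt.Printf("attempt %d failed: %v\n", attempt, err)
            }
            time.Sleep(delay)
            delay *= 2
        }
    }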

TestStartStop/group/embed-certs/serial/FirstStart (50.57s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220921220947-5916 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.2

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p embed-certs-20220921220947-5916 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.2: exit status 60 (49.6366423s)

-- stdout --
	* [embed-certs-20220921220947-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node embed-certs-20220921220947-5916 in cluster embed-certs-20220921220947-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-20220921220947-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 22:09:47.653088    8864 out.go:296] Setting OutFile to fd 1752 ...
	I0921 22:09:47.714346    8864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:09:47.714346    8864 out.go:309] Setting ErrFile to fd 1480...
	I0921 22:09:47.714346    8864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:09:47.733336    8864 out.go:303] Setting JSON to false
	I0921 22:09:47.737146    8864 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4256,"bootTime":1663793931,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:09:47.737309    8864 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:09:47.742867    8864 out.go:177] * [embed-certs-20220921220947-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:09:47.746163    8864 notify.go:214] Checking for updates...
	I0921 22:09:47.752617    8864 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:09:47.756869    8864 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:09:47.759501    8864 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:09:47.761718    8864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:09:47.765305    8864 config.go:180] Loaded profile config "cert-expiration-20220921220719-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:09:47.766225    8864 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:09:47.766656    8864 config.go:180] Loaded profile config "no-preload-20220921220937-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:09:47.767518    8864 config.go:180] Loaded profile config "old-k8s-version-20220921220934-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0921 22:09:47.767670    8864 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:09:48.060507    8864 docker.go:137] docker version: linux-20.10.17
	I0921 22:09:48.068427    8864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:09:48.588104    8864 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:80 SystemTime:2022-09-21 22:09:48.2320488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:09:48.598835    8864 out.go:177] * Using the docker driver based on user configuration
	I0921 22:09:48.603676    8864 start.go:284] selected driver: docker
	I0921 22:09:48.603676    8864 start.go:808] validating driver "docker" against <nil>
	I0921 22:09:48.604312    8864 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:09:48.672098    8864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:09:49.255339    8864 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:80 SystemTime:2022-09-21 22:09:48.8427347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:09:49.255339    8864 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:09:49.256075    8864 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:09:49.259440    8864 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:09:49.261162    8864 cni.go:95] Creating CNI manager for ""
	I0921 22:09:49.261162    8864 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:09:49.261162    8864 start_flags.go:316] config:
	{Name:embed-certs-20220921220947-5916 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:embed-certs-20220921220947-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:09:49.264939    8864 out.go:177] * Starting control plane node embed-certs-20220921220947-5916 in cluster embed-certs-20220921220947-5916
	I0921 22:09:49.267286    8864 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:09:49.270021    8864 out.go:177] * Pulling base image ...
	I0921 22:09:49.272101    8864 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:09:49.272101    8864 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:09:49.272101    8864 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:09:49.272101    8864 cache.go:57] Caching tarball of preloaded images
	I0921 22:09:49.273153    8864 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:09:49.273153    8864 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:09:49.273153    8864 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20220921220947-5916\config.json ...
	I0921 22:09:49.273780    8864 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20220921220947-5916\config.json: {Name:mk09029f4c53669dcc51495abceb2d863d2e1096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:09:49.475539    8864 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:09:49.475753    8864 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:09:49.475753    8864 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:09:49.475753    8864 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:09:49.475753    8864 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:09:49.475753    8864 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:09:49.476473    8864 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:09:49.476473    8864 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:09:49.476473    8864 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:09:51.748803    8864 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:09:51.748870    8864 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:09:51.748936    8864 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:09:51.749041    8864 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:09:51.963531    8864 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms
	I0921 22:09:53.448658    8864 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:09:53.448658    8864 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:09:53.448658    8864 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:09:53.448658    8864 start.go:364] acquiring machines lock for embed-certs-20220921220947-5916: {Name:mk43bf8b7be7335eaf7b2b1bea9994b147371248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:09:53.448658    8864 start.go:368] acquired machines lock for "embed-certs-20220921220947-5916" in 0s
	I0921 22:09:53.449339    8864 start.go:93] Provisioning new machine with config: &{Name:embed-certs-20220921220947-5916 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:embed-certs-20220921220947-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:09:53.449386    8864 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:09:53.452858    8864 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:09:53.453349    8864 start.go:159] libmachine.API.Create for "embed-certs-20220921220947-5916" (driver="docker")
	I0921 22:09:53.453423    8864 client.go:168] LocalClient.Create starting
	I0921 22:09:53.453620    8864 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:09:53.453620    8864 main.go:134] libmachine: Decoding PEM data...
	I0921 22:09:53.453620    8864 main.go:134] libmachine: Parsing certificate...
	I0921 22:09:53.454198    8864 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:09:53.454587    8864 main.go:134] libmachine: Decoding PEM data...
	I0921 22:09:53.454612    8864 main.go:134] libmachine: Parsing certificate...
	I0921 22:09:53.463398    8864 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220947-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:09:53.664879    8864 cli_runner.go:211] docker network inspect embed-certs-20220921220947-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:09:53.673229    8864 network_create.go:272] running [docker network inspect embed-certs-20220921220947-5916] to gather additional debugging logs...
	I0921 22:09:53.673229    8864 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220947-5916
	W0921 22:09:53.868529    8864 cli_runner.go:211] docker network inspect embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:09:53.868529    8864 network_create.go:275] error running [docker network inspect embed-certs-20220921220947-5916]: docker network inspect embed-certs-20220921220947-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220921220947-5916
	I0921 22:09:53.868529    8864 network_create.go:277] output of [docker network inspect embed-certs-20220921220947-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220921220947-5916
	
	** /stderr **
	I0921 22:09:53.875533    8864 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:09:54.080383    8864 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00058e380] misses:0}
	I0921 22:09:54.080481    8864 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:09:54.080523    8864 network_create.go:115] attempt to create docker network embed-certs-20220921220947-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:09:54.087358    8864 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916
	W0921 22:09:54.275070    8864 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916 returned with exit code 1
	E0921 22:09:54.275070    8864 network_create.go:104] error while trying to create docker network embed-certs-20220921220947-5916 192.168.49.0/24: create docker network embed-certs-20220921220947-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7f7e43dca6ea407a837913fff0a6c6106eba34201b871af529f824a026237eca (br-7f7e43dca6ea): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:09:54.275070    8864 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220921220947-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7f7e43dca6ea407a837913fff0a6c6106eba34201b871af529f824a026237eca (br-7f7e43dca6ea): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220921220947-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7f7e43dca6ea407a837913fff0a6c6106eba34201b871af529f824a026237eca (br-7f7e43dca6ea): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:09:54.289124    8864 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:09:54.501665    8864 cli_runner.go:164] Run: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:09:54.695569    8864 cli_runner.go:211] docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:09:54.695569    8864 client.go:171] LocalClient.Create took 1.2421372s
	I0921 22:09:56.712827    8864 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:09:56.722679    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:09:56.911612    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:09:56.911612    8864 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:09:57.202799    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:09:57.425239    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:09:57.425239    8864 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:09:57.984289    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:09:58.174843    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	W0921 22:09:58.175164    8864 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:09:58.175164    8864 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:09:58.185727    8864 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:09:58.191516    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:09:58.376452    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:09:58.376654    8864 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:09:58.620158    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:09:58.800898    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:09:58.800898    8864 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:09:59.167835    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:09:59.347246    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:09:59.347246    8864 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:00.023833    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:00.203435    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	W0921 22:10:00.203435    8864 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:10:00.203435    8864 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:00.203435    8864 start.go:128] duration metric: createHost completed in 6.7539979s
	I0921 22:10:00.203435    8864 start.go:83] releasing machines lock for "embed-certs-20220921220947-5916", held for 6.7542006s
	W0921 22:10:00.203435    8864 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for embed-certs-20220921220947-5916 container: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220921220947-5916: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220921220947-5916': mkdir /var/lib/docker/volumes/embed-certs-20220921220947-5916: read-only file system
	I0921 22:10:00.214154    8864 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:00.416607    8864 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:00.416775    8864 delete.go:82] Unable to get host status for embed-certs-20220921220947-5916, assuming it has already been deleted: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	W0921 22:10:00.417008    8864 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for embed-certs-20220921220947-5916 container: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220921220947-5916: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220921220947-5916': mkdir /var/lib/docker/volumes/embed-certs-20220921220947-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for embed-certs-20220921220947-5916 container: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220921220947-5916: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220921220947-5916': mkdir /var/lib/docker/volumes/embed-certs-20220921220947-5916: read-only file system
	
	I0921 22:10:00.417008    8864 start.go:617] Will try again in 5 seconds ...
	I0921 22:10:05.432631    8864 start.go:364] acquiring machines lock for embed-certs-20220921220947-5916: {Name:mk43bf8b7be7335eaf7b2b1bea9994b147371248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:10:05.433117    8864 start.go:368] acquired machines lock for "embed-certs-20220921220947-5916" in 333.8µs
	I0921 22:10:05.433347    8864 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:10:05.433458    8864 fix.go:55] fixHost starting: 
	I0921 22:10:05.449931    8864 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:05.635978    8864 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:05.635978    8864 fix.go:103] recreateIfNeeded on embed-certs-20220921220947-5916: state= err=unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:05.635978    8864 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:10:05.640839    8864 out.go:177] * docker "embed-certs-20220921220947-5916" container is missing, will recreate.
	I0921 22:10:05.643158    8864 delete.go:124] DEMOLISHING embed-certs-20220921220947-5916 ...
	I0921 22:10:05.660786    8864 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:05.853629    8864 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:05.853629    8864 stop.go:75] unable to get state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:05.853629    8864 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:05.872270    8864 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:06.057163    8864 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:06.057273    8864 delete.go:82] Unable to get host status for embed-certs-20220921220947-5916, assuming it has already been deleted: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:06.064401    8864 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220921220947-5916
	W0921 22:10:06.244337    8864 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:06.244337    8864 kic.go:356] could not find the container embed-certs-20220921220947-5916 to remove it. will try anyways
	I0921 22:10:06.252270    8864 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:06.476202    8864 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:06.476202    8864 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:06.486104    8864 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-20220921220947-5916 /bin/bash -c "sudo init 0"
	W0921 22:10:06.710433    8864 cli_runner.go:211] docker exec --privileged -t embed-certs-20220921220947-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:10:06.710647    8864 oci.go:646] error shutdown embed-certs-20220921220947-5916: docker exec --privileged -t embed-certs-20220921220947-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:07.730750    8864 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:07.923413    8864 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:07.923413    8864 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:07.923720    8864 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:10:07.923720    8864 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:08.266943    8864 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:08.474331    8864 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:08.474516    8864 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:08.474516    8864 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:10:08.474666    8864 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:08.947521    8864 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:09.155453    8864 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:09.155775    8864 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:09.155775    8864 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:10:09.155775    8864 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:10.067839    8864 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:10.260774    8864 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:10.260944    8864 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:10.260944    8864 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:10:10.261017    8864 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:11.994017    8864 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:12.203594    8864 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:12.203594    8864 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:12.203594    8864 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:10:12.203594    8864 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:15.542860    8864 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:15.735299    8864 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:15.735299    8864 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:15.735299    8864 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:10:15.735299    8864 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:18.466297    8864 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:18.662885    8864 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:18.662885    8864 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:18.662885    8864 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:10:18.662885    8864 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:23.687322    8864 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:23.909772    8864 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:23.910027    8864 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:23.910073    8864 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:10:23.910149    8864 oci.go:88] couldn't shut down embed-certs-20220921220947-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	 
	I0921 22:10:23.919415    8864 cli_runner.go:164] Run: docker rm -f -v embed-certs-20220921220947-5916
	I0921 22:10:24.135281    8864 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220921220947-5916
	W0921 22:10:24.313479    8864 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:24.320359    8864 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220947-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:10:24.499607    8864 cli_runner.go:211] docker network inspect embed-certs-20220921220947-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:10:24.508279    8864 network_create.go:272] running [docker network inspect embed-certs-20220921220947-5916] to gather additional debugging logs...
	I0921 22:10:24.508279    8864 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220947-5916
	W0921 22:10:24.733515    8864 cli_runner.go:211] docker network inspect embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:24.733605    8864 network_create.go:275] error running [docker network inspect embed-certs-20220921220947-5916]: docker network inspect embed-certs-20220921220947-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220921220947-5916
	I0921 22:10:24.733677    8864 network_create.go:277] output of [docker network inspect embed-certs-20220921220947-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220921220947-5916
	
	** /stderr **
	W0921 22:10:24.734568    8864 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:10:24.734568    8864 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:10:25.745816    8864 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:10:25.753463    8864 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:10:25.753463    8864 start.go:159] libmachine.API.Create for "embed-certs-20220921220947-5916" (driver="docker")
	I0921 22:10:25.753463    8864 client.go:168] LocalClient.Create starting
	I0921 22:10:25.754135    8864 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:10:25.754135    8864 main.go:134] libmachine: Decoding PEM data...
	I0921 22:10:25.754135    8864 main.go:134] libmachine: Parsing certificate...
	I0921 22:10:25.754710    8864 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:10:25.754878    8864 main.go:134] libmachine: Decoding PEM data...
	I0921 22:10:25.754878    8864 main.go:134] libmachine: Parsing certificate...
	I0921 22:10:25.767579    8864 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220947-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:10:26.011806    8864 cli_runner.go:211] docker network inspect embed-certs-20220921220947-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:10:26.019296    8864 network_create.go:272] running [docker network inspect embed-certs-20220921220947-5916] to gather additional debugging logs...
	I0921 22:10:26.019296    8864 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220947-5916
	W0921 22:10:26.229663    8864 cli_runner.go:211] docker network inspect embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:26.229663    8864 network_create.go:275] error running [docker network inspect embed-certs-20220921220947-5916]: docker network inspect embed-certs-20220921220947-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220921220947-5916
	I0921 22:10:26.229663    8864 network_create.go:277] output of [docker network inspect embed-certs-20220921220947-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220921220947-5916
	
	** /stderr **
	I0921 22:10:26.239773    8864 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:10:26.467015    8864 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058e380] amended:false}} dirty:map[] misses:0}
	I0921 22:10:26.467090    8864 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:10:26.486131    8864 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058e380] amended:true}} dirty:map[192.168.49.0:0xc00058e380 192.168.58.0:0xc00058e4d8] misses:0}
	I0921 22:10:26.486131    8864 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:10:26.486854    8864 network_create.go:115] attempt to create docker network embed-certs-20220921220947-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:10:26.492690    8864 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916
	W0921 22:10:26.705356    8864 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916 returned with exit code 1
	E0921 22:10:26.705504    8864 network_create.go:104] error while trying to create docker network embed-certs-20220921220947-5916 192.168.58.0/24: create docker network embed-certs-20220921220947-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b83192d580323a7085aa1fde25ef32b1a1d42671bfba797a63b5fe4d6237347e (br-b83192d58032): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:10:26.705867    8864 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220921220947-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b83192d580323a7085aa1fde25ef32b1a1d42671bfba797a63b5fe4d6237347e (br-b83192d58032): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220921220947-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b83192d580323a7085aa1fde25ef32b1a1d42671bfba797a63b5fe4d6237347e (br-b83192d58032): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:10:26.724507    8864 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:10:26.953099    8864 cli_runner.go:164] Run: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:10:27.177030    8864 cli_runner.go:211] docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:10:27.177030    8864 client.go:171] LocalClient.Create took 1.4235558s
	I0921 22:10:29.190058    8864 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:10:29.196784    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:29.422078    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:29.422308    8864 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:29.679541    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:29.898932    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:29.898932    8864 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:30.215631    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:30.424974    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:30.425183    8864 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:30.886849    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:31.093460    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	W0921 22:10:31.093990    8864 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:10:31.094066    8864 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:31.106721    8864 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:10:31.114351    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:31.326030    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:31.326091    8864 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:31.521297    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:31.716077    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:31.716077    8864 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:31.997723    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:32.199076    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:32.199076    8864 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:32.703537    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:32.885919    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	W0921 22:10:32.885919    8864 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:10:32.885919    8864 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:32.885919    8864 start.go:128] duration metric: createHost completed in 7.1400174s
	I0921 22:10:32.895901    8864 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:10:32.902912    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:33.088638    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:33.088638    8864 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:33.460335    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:33.655843    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:33.656006    8864 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:33.974546    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:34.166062    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:34.166062    8864 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:34.628987    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:34.827424    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	W0921 22:10:34.827649    8864 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:10:34.827769    8864 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:34.837374    8864 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:10:34.843513    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:35.012885    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:35.012885    8864 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:35.202124    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:35.409443    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:35.409684    8864 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:35.937588    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:36.130383    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:36.130383    8864 retry.go:31] will retry after 673.154531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:36.814371    8864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:37.005915    8864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	W0921 22:10:37.006095    8864 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:10:37.006095    8864 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:37.006095    8864 fix.go:57] fixHost completed within 31.5723944s
	I0921 22:10:37.006095    8864 start.go:83] releasing machines lock for "embed-certs-20220921220947-5916", held for 31.5727352s
	W0921 22:10:37.006753    8864 out.go:239] * Failed to start docker container. Running "minikube delete -p embed-certs-20220921220947-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220921220947-5916 container: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220921220947-5916: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220921220947-5916': mkdir /var/lib/docker/volumes/embed-certs-20220921220947-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p embed-certs-20220921220947-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220921220947-5916 container: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220921220947-5916: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220921220947-5916': mkdir /var/lib/docker/volumes/embed-certs-20220921220947-5916: read-only file system
	
	I0921 22:10:37.010454    8864 out.go:177] 
	W0921 22:10:37.013647    8864 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220921220947-5916 container: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220921220947-5916: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220921220947-5916': mkdir /var/lib/docker/volumes/embed-certs-20220921220947-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220921220947-5916 container: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220921220947-5916: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220921220947-5916': mkdir /var/lib/docker/volumes/embed-certs-20220921220947-5916: read-only file system
	
	W0921 22:10:37.014285    8864 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:10:37.014285    8864 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:10:37.017315    8864 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p embed-certs-20220921220947-5916 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.2": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220947-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220921220947-5916: exit status 1 (245.3614ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916: exit status 7 (576.1366ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:10:37.970411    5244 status.go:247] status error: host: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220921220947-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (50.57s)
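The failure above has two distinct root causes recorded in the trace: the dedicated bridge network for the profile could not be created because 192.168.58.0/24 overlaps an existing bridge (br-8a3cd8d165a4), and the node volume could not be created because the daemon's volume root is read-only (PR_DOCKER_READONLY_VOL). A minimal diagnostic sketch, assuming a POSIX shell (e.g. Git Bash) and a working docker CLI on the same host; the profile name is copied from this run:

	# Print every network with its IPv4 subnet to locate the bridge that already owns 192.168.58.0/24.
	docker network ls -q | xargs docker network inspect -f '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{end}}'
	# Reproduce the volume failure outside minikube; if this also reports a read-only
	# file system, restarting Docker Desktop (the suggestion in the log) is the usual fix.
	docker volume create probe-readonly && docker volume rm probe-readonly
	# Remove the half-created profile before retrying, as the error text recommends.
	out/minikube-windows-amd64.exe delete -p embed-certs-20220921220947-5916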

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (2.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220921220934-5916 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220921220934-5916 create -f testdata\busybox.yaml: exit status 1 (208.0651ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220921220934-5916" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-20220921220934-5916 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220921220934-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220921220934-5916: exit status 1 (268.9334ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916: exit status 7 (601.3532ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:10:26.737143    4228 status.go:247] status error: host: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:239: status error: exit status 7 (may be ok)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:241: "old-k8s-version-20220921220934-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220921220934-5916

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220921220934-5916: exit status 1 (361.4807ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916: exit status 7 (606.0238ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:10:27.783679    6088 status.go:247] status error: host: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220921220934-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (2.13s)
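Both kubectl steps fail only because the kubeconfig context for this profile was never written; the cluster behind it never started. A quick check, assuming kubectl and the test binary from this run are on PATH:

	# List the contexts kubectl actually knows about; the profile name will be absent.
	kubectl config get-contexts -o name
	# Cross-check against the profiles minikube still tracks on this host.
	out/minikube-windows-amd64.exe profile list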

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (2.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220921220937-5916 create -f testdata\busybox.yaml

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-20220921220937-5916 create -f testdata\busybox.yaml: exit status 1 (211.2686ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-20220921220937-5916" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-20220921220937-5916 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220937-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220921220937-5916: exit status 1 (267.7689ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916: exit status 7 (603.2905ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:10:28.776104    4312 status.go:247] status error: host: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220921220937-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220937-5916

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220921220937-5916: exit status 1 (267.7513ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916: exit status 7 (630.5024ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:10:29.699532    8372 status.go:247] status error: host: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220921220937-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (2.01s)
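The post-mortem helper runs the same two probes for every test in this group; a sketch of the pair, copied from the trace, that distinguishes "container never created" from "container crashed":

	# "No such object" here means the node container does not exist at all.
	docker inspect no-preload-20220921220937-5916
	# minikube's own status agrees: exit code 7 with host state "Nonexistent".
	out/minikube-windows-amd64.exe status --format '{{.Host}}' -p no-preload-20220921220937-5916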

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220921220934-5916 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-20220921220934-5916 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220921220934-5916 describe deploy/metrics-server -n kube-system: exit status 1 (181.5297ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220921220934-5916" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220921220934-5916 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220921220934-5916

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220921220934-5916: exit status 1 (266.4407ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916: exit status 7 (630.2897ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:10:29.529447    4348 status.go:247] status error: host: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220921220934-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.76s)
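The assertion expects the metrics-server deployment image to be rewritten to the override pair passed to the addon command, i.e. fake.domain/k8s.gcr.io/echoserver:1.4. With a running cluster the rewritten image could be read back directly; a sketch, assuming the context existed:

	# Read the image reference the addon actually deployed.
	kubectl --context old-k8s-version-20220921220934-5916 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'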

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (20.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20220921220934-5916 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p old-k8s-version-20220921220934-5916 --alsologtostderr -v=3: exit status 82 (19.3729235s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-20220921220934-5916"  ...
	* Stopping node "old-k8s-version-20220921220934-5916"  ...
	* Stopping node "old-k8s-version-20220921220934-5916"  ...
	* Stopping node "old-k8s-version-20220921220934-5916"  ...
	* Stopping node "old-k8s-version-20220921220934-5916"  ...
	* Stopping node "old-k8s-version-20220921220934-5916"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:10:29.835932    7864 out.go:296] Setting OutFile to fd 1976 ...
	I0921 22:10:29.906955    7864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:10:29.906955    7864 out.go:309] Setting ErrFile to fd 1868...
	I0921 22:10:29.906955    7864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:10:29.918979    7864 out.go:303] Setting JSON to false
	I0921 22:10:29.919942    7864 daemonize_windows.go:44] trying to kill existing schedule stop for profile old-k8s-version-20220921220934-5916...
	I0921 22:10:29.932006    7864 ssh_runner.go:195] Run: systemctl --version
	I0921 22:10:29.940941    7864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:30.134101    7864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:30.134158    7864 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:30.432903    7864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:30.633264    7864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:30.633264    7864 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:31.196898    7864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:31.420783    7864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:31.435274    7864 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0921 22:10:31.443453    7864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:31.638346    7864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:31.638346    7864 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:31.887586    7864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:32.097856    7864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:32.097856    7864 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:32.458634    7864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:32.648552    7864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:32.648552    7864 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:33.342387    7864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:10:33.530936    7864 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	E0921 22:10:33.530936    7864 daemonize_windows.go:38] error terminating scheduled stop for profile old-k8s-version-20220921220934-5916: stopping schedule-stop service for profile old-k8s-version-20220921220934-5916: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:33.530936    7864 mustload.go:65] Loading cluster: old-k8s-version-20220921220934-5916
	I0921 22:10:33.531938    7864 config.go:180] Loaded profile config "old-k8s-version-20220921220934-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0921 22:10:33.531938    7864 stop.go:39] StopHost: old-k8s-version-20220921220934-5916
	I0921 22:10:33.540440    7864 out.go:177] * Stopping node "old-k8s-version-20220921220934-5916"  ...
	I0921 22:10:33.559222    7864 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:10:33.747909    7864 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:33.747909    7864 stop.go:75] unable to get state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	W0921 22:10:33.747909    7864 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:33.747909    7864 retry.go:31] will retry after 656.519254ms: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:34.417250    7864 stop.go:39] StopHost: old-k8s-version-20220921220934-5916
	I0921 22:10:34.434354    7864 out.go:177] * Stopping node "old-k8s-version-20220921220934-5916"  ...
	I0921 22:10:34.458882    7864 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:10:34.651147    7864 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:34.651147    7864 stop.go:75] unable to get state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	W0921 22:10:34.651147    7864 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:34.651147    7864 retry.go:31] will retry after 895.454278ms: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:35.555027    7864 stop.go:39] StopHost: old-k8s-version-20220921220934-5916
	I0921 22:10:35.559138    7864 out.go:177] * Stopping node "old-k8s-version-20220921220934-5916"  ...
	I0921 22:10:35.575075    7864 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:10:35.786736    7864 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:35.786736    7864 stop.go:75] unable to get state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	W0921 22:10:35.786736    7864 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:35.786736    7864 retry.go:31] will retry after 1.802051686s: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:37.596110    7864 stop.go:39] StopHost: old-k8s-version-20220921220934-5916
	I0921 22:10:37.600991    7864 out.go:177] * Stopping node "old-k8s-version-20220921220934-5916"  ...
	I0921 22:10:37.626308    7864 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:10:37.813825    7864 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:37.813825    7864 stop.go:75] unable to get state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	W0921 22:10:37.813825    7864 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:37.813825    7864 retry.go:31] will retry after 3.426342621s: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:41.252724    7864 stop.go:39] StopHost: old-k8s-version-20220921220934-5916
	I0921 22:10:41.258362    7864 out.go:177] * Stopping node "old-k8s-version-20220921220934-5916"  ...
	I0921 22:10:41.282478    7864 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:10:41.516328    7864 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:41.516577    7864 stop.go:75] unable to get state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	W0921 22:10:41.516577    7864 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:41.516577    7864 retry.go:31] will retry after 6.650302303s: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:48.168004    7864 stop.go:39] StopHost: old-k8s-version-20220921220934-5916
	I0921 22:10:48.173697    7864 out.go:177] * Stopping node "old-k8s-version-20220921220934-5916"  ...
	I0921 22:10:48.191839    7864 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:10:48.404819    7864 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:48.405014    7864 stop.go:75] unable to get state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	W0921 22:10:48.405014    7864 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:48.408887    7864 out.go:177] 
	W0921 22:10:48.411865    7864 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-20220921220934-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-20220921220934-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:10:48.411939    7864 out.go:239] * 
	* 
	W0921 22:10:48.883244    7864 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:10:48.886173    7864 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p old-k8s-version-20220921220934-5916 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220921220934-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220921220934-5916: exit status 1 (251.0871ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916: exit status 7 (576.6833ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:10:49.744377    4160 status.go:247] status error: host: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220921220934-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (20.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220921220937-5916 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-20220921220937-5916 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-20220921220937-5916 describe deploy/metrics-server -n kube-system: exit status 1 (188.9913ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-20220921220937-5916" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-20220921220937-5916 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220937-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220921220937-5916: exit status 1 (256.2218ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916: exit status 7 (588.9622ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:10:31.404612    4984 status.go:247] status error: host: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220921220937-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.72s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (20.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20220921220937-5916 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p no-preload-20220921220937-5916 --alsologtostderr -v=3: exit status 82 (19.3170205s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-20220921220937-5916"  ...
	* Stopping node "no-preload-20220921220937-5916"  ...
	* Stopping node "no-preload-20220921220937-5916"  ...
	* Stopping node "no-preload-20220921220937-5916"  ...
	* Stopping node "no-preload-20220921220937-5916"  ...
	* Stopping node "no-preload-20220921220937-5916"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:10:31.706070    7464 out.go:296] Setting OutFile to fd 1576 ...
	I0921 22:10:31.772472    7464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:10:31.772472    7464 out.go:309] Setting ErrFile to fd 1456...
	I0921 22:10:31.772472    7464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:10:31.784563    7464 out.go:303] Setting JSON to false
	I0921 22:10:31.784563    7464 daemonize_windows.go:44] trying to kill existing schedule stop for profile no-preload-20220921220937-5916...
	I0921 22:10:31.795450    7464 ssh_runner.go:195] Run: systemctl --version
	I0921 22:10:31.802418    7464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:32.018506    7464 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:32.018506    7464 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:32.317724    7464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:32.525331    7464 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:32.525331    7464 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:33.083332    7464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:33.277218    7464 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:33.287805    7464 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0921 22:10:33.293228    7464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:33.514932    7464 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:33.514932    7464 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:33.771734    7464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:33.962196    7464 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:33.962196    7464 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:34.317733    7464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:34.510303    7464 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:10:34.510303    7464 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:35.202124    7464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:10:35.424944    7464 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	E0921 22:10:35.425041    7464 daemonize_windows.go:38] error terminating scheduled stop for profile no-preload-20220921220937-5916: stopping schedule-stop service for profile no-preload-20220921220937-5916: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:35.425041    7464 mustload.go:65] Loading cluster: no-preload-20220921220937-5916
	I0921 22:10:35.425787    7464 config.go:180] Loaded profile config "no-preload-20220921220937-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:10:35.425787    7464 stop.go:39] StopHost: no-preload-20220921220937-5916
	I0921 22:10:35.430937    7464 out.go:177] * Stopping node "no-preload-20220921220937-5916"  ...
	I0921 22:10:35.446371    7464 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:10:35.663528    7464 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:35.663528    7464 stop.go:75] unable to get state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	W0921 22:10:35.663528    7464 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:35.663528    7464 retry.go:31] will retry after 656.519254ms: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:36.322707    7464 stop.go:39] StopHost: no-preload-20220921220937-5916
	I0921 22:10:36.337379    7464 out.go:177] * Stopping node "no-preload-20220921220937-5916"  ...
	I0921 22:10:36.352930    7464 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:10:36.542523    7464 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:36.542609    7464 stop.go:75] unable to get state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	W0921 22:10:36.542609    7464 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:36.542609    7464 retry.go:31] will retry after 895.454278ms: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:37.441627    7464 stop.go:39] StopHost: no-preload-20220921220937-5916
	I0921 22:10:37.447459    7464 out.go:177] * Stopping node "no-preload-20220921220937-5916"  ...
	I0921 22:10:37.462929    7464 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:10:37.673480    7464 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:37.673480    7464 stop.go:75] unable to get state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	W0921 22:10:37.673480    7464 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:37.673480    7464 retry.go:31] will retry after 1.802051686s: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:39.486725    7464 stop.go:39] StopHost: no-preload-20220921220937-5916
	I0921 22:10:39.491440    7464 out.go:177] * Stopping node "no-preload-20220921220937-5916"  ...
	I0921 22:10:39.507921    7464 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:10:39.707430    7464 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:39.707430    7464 stop.go:75] unable to get state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	W0921 22:10:39.707430    7464 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:39.707430    7464 retry.go:31] will retry after 3.426342621s: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:43.135275    7464 stop.go:39] StopHost: no-preload-20220921220937-5916
	I0921 22:10:43.144661    7464 out.go:177] * Stopping node "no-preload-20220921220937-5916"  ...
	I0921 22:10:43.160672    7464 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:10:43.354729    7464 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:43.354729    7464 stop.go:75] unable to get state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	W0921 22:10:43.354729    7464 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:43.354729    7464 retry.go:31] will retry after 6.650302303s: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:50.017871    7464 stop.go:39] StopHost: no-preload-20220921220937-5916
	I0921 22:10:50.022284    7464 out.go:177] * Stopping node "no-preload-20220921220937-5916"  ...
	I0921 22:10:50.038281    7464 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:10:50.232447    7464 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:50.232447    7464 stop.go:75] unable to get state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	W0921 22:10:50.232447    7464 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:10:50.236486    7464 out.go:177] 
	W0921 22:10:50.239436    7464 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-20220921220937-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-20220921220937-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:10:50.239436    7464 out.go:239] * 
	* 
	W0921 22:10:50.715347    7464 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:10:50.718921    7464 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p no-preload-20220921220937-5916 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220937-5916

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220921220937-5916: exit status 1 (255.532ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916: exit status 7 (582.3062ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:10:51.571649    8104 status.go:247] status error: host: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220921220937-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (20.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (1.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220921220947-5916 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-20220921220947-5916 create -f testdata\busybox.yaml: exit status 1 (164.3054ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-20220921220947-5916" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-20220921220947-5916 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220947-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220921220947-5916: exit status 1 (256.7268ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916: exit status 7 (549.4784ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:10:38.957463    3088 status.go:247] status error: host: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220921220947-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220947-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220921220947-5916: exit status 1 (254.765ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916: exit status 7 (533.6905ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:10:39.754447    2344 status.go:247] status error: host: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220921220947-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (1.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220921220947-5916 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-20220921220947-5916 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-20220921220947-5916 describe deploy/metrics-server -n kube-system: exit status 1 (168.046ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-20220921220947-5916" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-20220921220947-5916 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220947-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220921220947-5916: exit status 1 (251.8633ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916: exit status 7 (561.1103ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:10:41.329786    8868 status.go:247] status error: host: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220921220947-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (20.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20220921220947-5916 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p embed-certs-20220921220947-5916 --alsologtostderr -v=3: exit status 82 (19.3393392s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-20220921220947-5916"  ...
	* Stopping node "embed-certs-20220921220947-5916"  ...
	* Stopping node "embed-certs-20220921220947-5916"  ...
	* Stopping node "embed-certs-20220921220947-5916"  ...
	* Stopping node "embed-certs-20220921220947-5916"  ...
	* Stopping node "embed-certs-20220921220947-5916"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:10:41.618652    8152 out.go:296] Setting OutFile to fd 2004 ...
	I0921 22:10:41.685666    8152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:10:41.685666    8152 out.go:309] Setting ErrFile to fd 1760...
	I0921 22:10:41.685724    8152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:10:41.696937    8152 out.go:303] Setting JSON to false
	I0921 22:10:41.697971    8152 daemonize_windows.go:44] trying to kill existing schedule stop for profile embed-certs-20220921220947-5916...
	I0921 22:10:41.708611    8152 ssh_runner.go:195] Run: systemctl --version
	I0921 22:10:41.714187    8152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:41.892552    8152 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:41.892977    8152 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:42.187996    8152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:42.381032    8152 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:42.381032    8152 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:42.940269    8152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:43.135322    8152 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:43.149664    8152 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0921 22:10:43.157661    8152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:43.369542    8152 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:43.369542    8152 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:43.616179    8152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:43.793572    8152 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:43.793777    8152 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:44.158024    8152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:44.353318    8152 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:10:44.353318    8152 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:45.039664    8152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:10:45.216440    8152 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	E0921 22:10:45.216829    8152 daemonize_windows.go:38] error terminating scheduled stop for profile embed-certs-20220921220947-5916: stopping schedule-stop service for profile embed-certs-20220921220947-5916: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:45.216912    8152 mustload.go:65] Loading cluster: embed-certs-20220921220947-5916
	I0921 22:10:45.217692    8152 config.go:180] Loaded profile config "embed-certs-20220921220947-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:10:45.217692    8152 stop.go:39] StopHost: embed-certs-20220921220947-5916
	I0921 22:10:45.223017    8152 out.go:177] * Stopping node "embed-certs-20220921220947-5916"  ...
	I0921 22:10:45.238102    8152 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:45.434171    8152 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:45.434171    8152 stop.go:75] unable to get state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	W0921 22:10:45.434171    8152 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:45.434171    8152 retry.go:31] will retry after 656.519254ms: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:46.100948    8152 stop.go:39] StopHost: embed-certs-20220921220947-5916
	I0921 22:10:46.107579    8152 out.go:177] * Stopping node "embed-certs-20220921220947-5916"  ...
	I0921 22:10:46.122962    8152 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:46.344591    8152 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:46.344591    8152 stop.go:75] unable to get state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	W0921 22:10:46.344591    8152 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:46.344591    8152 retry.go:31] will retry after 895.454278ms: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:47.250350    8152 stop.go:39] StopHost: embed-certs-20220921220947-5916
	I0921 22:10:47.257837    8152 out.go:177] * Stopping node "embed-certs-20220921220947-5916"  ...
	I0921 22:10:47.281919    8152 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:47.498827    8152 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:47.498986    8152 stop.go:75] unable to get state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	W0921 22:10:47.499059    8152 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:47.499059    8152 retry.go:31] will retry after 1.802051686s: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:49.308766    8152 stop.go:39] StopHost: embed-certs-20220921220947-5916
	I0921 22:10:49.314151    8152 out.go:177] * Stopping node "embed-certs-20220921220947-5916"  ...
	I0921 22:10:49.332810    8152 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:49.525331    8152 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:49.525331    8152 stop.go:75] unable to get state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	W0921 22:10:49.525331    8152 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:49.525331    8152 retry.go:31] will retry after 3.426342621s: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:52.956844    8152 stop.go:39] StopHost: embed-certs-20220921220947-5916
	I0921 22:10:52.960844    8152 out.go:177] * Stopping node "embed-certs-20220921220947-5916"  ...
	I0921 22:10:52.977130    8152 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:10:53.162178    8152 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:53.162178    8152 stop.go:75] unable to get state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	W0921 22:10:53.162178    8152 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:53.162178    8152 retry.go:31] will retry after 6.650302303s: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:10:59.820370    8152 stop.go:39] StopHost: embed-certs-20220921220947-5916
	I0921 22:10:59.825996    8152 out.go:177] * Stopping node "embed-certs-20220921220947-5916"  ...
	I0921 22:10:59.847902    8152 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:00.053620    8152 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:11:00.053759    8152 stop.go:75] unable to get state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	W0921 22:11:00.053835    8152 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:00.056899    8152 out.go:177] 
	W0921 22:11:00.058791    8152 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect embed-certs-20220921220947-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect embed-certs-20220921220947-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:11:00.058791    8152 out.go:239] * 
	* 
	W0921 22:11:00.637007    8152 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:11:00.640637    8152 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p embed-certs-20220921220947-5916 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220947-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220921220947-5916: exit status 1 (279.7297ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916: exit status 7 (625.7137ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:11:01.591474    8368 status.go:247] status error: host: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220921220947-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (20.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (2.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916: exit status 7 (549.6322ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:10:50.295401    7916 status.go:247] status error: host: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220921220934-5916 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220921220934-5916

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220921220934-5916: exit status 1 (256.9264ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220921220934-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916: exit status 7 (569.3031ms)

-- stdout --
	Nonexistent

                                                
** stderr ** 
	E0921 22:10:51.774276    8244 status.go:247] status error: host: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220921220934-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (2.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916: exit status 7 (574.7104ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:10:52.147061    3800 status.go:247] status error: host: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220921220937-5916 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220937-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220921220937-5916: exit status 1 (240.8582ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220921220937-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916: exit status 7 (580.988ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:10:53.571015    7512 status.go:247] status error: host: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220921220937-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (2.00s)

TestStartStop/group/old-k8s-version/serial/SecondStart (77.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220921220934-5916 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-20220921220934-5916 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: exit status 60 (1m16.5189585s)

-- stdout --
	* [old-k8s-version-20220921220934-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.25.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.2
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220921220934-5916 in cluster old-k8s-version-20220921220934-5916
	* Pulling base image ...
	* docker "old-k8s-version-20220921220934-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-20220921220934-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 22:10:52.043040    8216 out.go:296] Setting OutFile to fd 1660 ...
	I0921 22:10:52.118846    8216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:10:52.118846    8216 out.go:309] Setting ErrFile to fd 1936...
	I0921 22:10:52.118917    8216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:10:52.140780    8216 out.go:303] Setting JSON to false
	I0921 22:10:52.144257    8216 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4320,"bootTime":1663793932,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:10:52.144257    8216 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:10:52.148140    8216 out.go:177] * [old-k8s-version-20220921220934-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:10:52.150864    8216 notify.go:214] Checking for updates...
	I0921 22:10:52.153905    8216 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:10:52.156578    8216 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:10:52.161373    8216 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:10:52.163691    8216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:10:52.167133    8216 config.go:180] Loaded profile config "old-k8s-version-20220921220934-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0921 22:10:52.169835    8216 out.go:177] * Kubernetes 1.25.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.2
	I0921 22:10:52.171918    8216 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:10:52.458070    8216 docker.go:137] docker version: linux-20.10.17
	I0921 22:10:52.465930    8216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:10:53.051838    8216 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:81 SystemTime:2022-09-21 22:10:52.6285871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:10:53.055841    8216 out.go:177] * Using the docker driver based on existing profile
	I0921 22:10:53.057842    8216 start.go:284] selected driver: docker
	I0921 22:10:53.057842    8216 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220921220934-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220921220934-5916 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host M
ount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:10:53.057842    8216 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:10:53.149187    8216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:10:53.709790    8216 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:81 SystemTime:2022-09-21 22:10:53.3171342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:10:53.709790    8216 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:10:53.709790    8216 cni.go:95] Creating CNI manager for ""
	I0921 22:10:53.709790    8216 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:10:53.709790    8216 start_flags.go:316] config:
	{Name:old-k8s-version-20220921220934-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220921220934-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:10:53.715154    8216 out.go:177] * Starting control plane node old-k8s-version-20220921220934-5916 in cluster old-k8s-version-20220921220934-5916
	I0921 22:10:53.717561    8216 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:10:53.720392    8216 out.go:177] * Pulling base image ...
	I0921 22:10:53.722557    8216 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0921 22:10:53.722557    8216 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:10:53.722657    8216 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0921 22:10:53.722657    8216 cache.go:57] Caching tarball of preloaded images
	I0921 22:10:53.723269    8216 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:10:53.723269    8216 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0921 22:10:53.723269    8216 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20220921220934-5916\config.json ...
	I0921 22:10:53.912912    8216 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:10:53.912912    8216 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:10:53.912912    8216 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:10:53.912912    8216 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:10:53.912912    8216 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:10:53.912912    8216 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:10:53.912912    8216 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:10:53.912912    8216 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:10:53.912912    8216 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:10:56.325723    8216 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:10:56.326281    8216 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:10:56.326401    8216 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:10:56.326713    8216 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:10:56.541974    8216 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 900ms
	I0921 22:10:58.180946    8216 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:10:58.180983    8216 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:10:58.180983    8216 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:10:58.180983    8216 start.go:364] acquiring machines lock for old-k8s-version-20220921220934-5916: {Name:mka5121945d619472d3cfcf71df0e13caeaa183b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:10:58.180983    8216 start.go:368] acquired machines lock for "old-k8s-version-20220921220934-5916" in 0s
	I0921 22:10:58.180983    8216 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:10:58.181578    8216 fix.go:55] fixHost starting: 
	I0921 22:10:58.197955    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:10:58.410711    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:58.410711    8216 fix.go:103] recreateIfNeeded on old-k8s-version-20220921220934-5916: state= err=unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:58.410711    8216 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:10:58.414717    8216 out.go:177] * docker "old-k8s-version-20220921220934-5916" container is missing, will recreate.
	I0921 22:10:58.417239    8216 delete.go:124] DEMOLISHING old-k8s-version-20220921220934-5916 ...
	I0921 22:10:58.432681    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:10:58.629118    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:58.629118    8216 stop.go:75] unable to get state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:58.629118    8216 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:58.644291    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:10:58.815959    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:10:58.815959    8216 delete.go:82] Unable to get host status for old-k8s-version-20220921220934-5916, assuming it has already been deleted: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:58.824047    8216 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220921220934-5916
	W0921 22:10:59.012713    8216 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:10:59.012713    8216 kic.go:356] could not find the container old-k8s-version-20220921220934-5916 to remove it. will try anyways
	I0921 22:10:59.020713    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:10:59.213328    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:10:59.213328    8216 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:10:59.223582    8216 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-20220921220934-5916 /bin/bash -c "sudo init 0"
	W0921 22:10:59.445649    8216 cli_runner.go:211] docker exec --privileged -t old-k8s-version-20220921220934-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:10:59.445815    8216 oci.go:646] error shutdown old-k8s-version-20220921220934-5916: docker exec --privileged -t old-k8s-version-20220921220934-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:00.463237    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:00.669475    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:00.669475    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:00.669475    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:00.669475    8216 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:01.240893    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:01.452439    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:01.452439    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:01.452439    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:01.452439    8216 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:02.540956    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:02.754228    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:02.754228    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:02.754228    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:02.754228    8216 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:04.078146    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:04.299926    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:04.299926    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:04.299926    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:04.299926    8216 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:05.896286    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:06.119961    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:06.119961    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:06.119961    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:06.119961    8216 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:08.486500    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:08.679465    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:08.679465    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:08.679465    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:08.679465    8216 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:13.206052    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:13.395913    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:13.396005    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:13.396108    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:13.396108    8216 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:16.640678    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:16.842528    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:16.842528    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:16.842528    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:16.842528    8216 oci.go:88] couldn't shut down old-k8s-version-20220921220934-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	 
	I0921 22:11:16.850143    8216 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-20220921220934-5916
	I0921 22:11:17.054682    8216 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220921220934-5916
	W0921 22:11:17.249067    8216 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:17.257587    8216 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220921220934-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:11:17.451984    8216 cli_runner.go:211] docker network inspect old-k8s-version-20220921220934-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:11:17.459370    8216 network_create.go:272] running [docker network inspect old-k8s-version-20220921220934-5916] to gather additional debugging logs...
	I0921 22:11:17.459370    8216 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220921220934-5916
	W0921 22:11:17.638807    8216 cli_runner.go:211] docker network inspect old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:17.638903    8216 network_create.go:275] error running [docker network inspect old-k8s-version-20220921220934-5916]: docker network inspect old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220921220934-5916
	I0921 22:11:17.638937    8216 network_create.go:277] output of [docker network inspect old-k8s-version-20220921220934-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220921220934-5916
	
	** /stderr **
	W0921 22:11:17.639963    8216 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:11:17.640011    8216 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:11:18.652471    8216 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:11:18.659400    8216 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:11:18.659984    8216 start.go:159] libmachine.API.Create for "old-k8s-version-20220921220934-5916" (driver="docker")
	I0921 22:11:18.659984    8216 client.go:168] LocalClient.Create starting
	I0921 22:11:18.660618    8216 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:11:18.660618    8216 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:18.660618    8216 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:18.660618    8216 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:11:18.661400    8216 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:18.661400    8216 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:18.669482    8216 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220921220934-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:11:18.870032    8216 cli_runner.go:211] docker network inspect old-k8s-version-20220921220934-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:11:18.877393    8216 network_create.go:272] running [docker network inspect old-k8s-version-20220921220934-5916] to gather additional debugging logs...
	I0921 22:11:18.877907    8216 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220921220934-5916
	W0921 22:11:19.089095    8216 cli_runner.go:211] docker network inspect old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:19.089095    8216 network_create.go:275] error running [docker network inspect old-k8s-version-20220921220934-5916]: docker network inspect old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220921220934-5916
	I0921 22:11:19.089095    8216 network_create.go:277] output of [docker network inspect old-k8s-version-20220921220934-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220921220934-5916
	
	** /stderr **
	I0921 22:11:19.098850    8216 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:11:19.317573    8216 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006dc180] misses:0}
	I0921 22:11:19.317573    8216 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:11:19.318147    8216 network_create.go:115] attempt to create docker network old-k8s-version-20220921220934-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:11:19.325617    8216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916
	W0921 22:11:19.528335    8216 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916 returned with exit code 1
	E0921 22:11:19.528335    8216 network_create.go:104] error while trying to create docker network old-k8s-version-20220921220934-5916 192.168.49.0/24: create docker network old-k8s-version-20220921220934-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bee4a1b964fdab5b90734b0269b51e6b2604b19093ff12571ec9fb0951d3e573 (br-bee4a1b964fd): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:11:19.528335    8216 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220921220934-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bee4a1b964fdab5b90734b0269b51e6b2604b19093ff12571ec9fb0951d3e573 (br-bee4a1b964fd): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220921220934-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bee4a1b964fdab5b90734b0269b51e6b2604b19093ff12571ec9fb0951d3e573 (br-bee4a1b964fd): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:11:19.541335    8216 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:11:19.750559    8216 cli_runner.go:164] Run: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:11:19.947384    8216 cli_runner.go:211] docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:11:19.947384    8216 client.go:171] LocalClient.Create took 1.2873897s
	I0921 22:11:21.976577    8216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:11:21.983842    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:22.177948    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:22.177948    8216 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:22.340962    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:22.534334    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:22.534334    8216 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:22.846424    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:23.042442    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:23.042496    8216 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:23.628980    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:23.843743    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	W0921 22:11:23.843942    8216 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:11:23.843942    8216 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
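The disk checks that bracket these failures are simple awk one-liners run over ssh: df -h /var | awk 'NR==2{print $5}' keeps only the Use% field from the second line of df's output, and the df -BG variant a few lines below keeps the fourth field, the available space in GiB. They fail in this run only because there is no container to ssh into. For reference, the same extraction done directly in Go on a made-up df sample:

    // dfparse.go: the extraction the awk one-liners perform, shown on an
    // invented df sample: take line 2 (awk's NR==2) and print field 5 ($5).
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	sample := "Filesystem      Size  Used Avail Use% Mounted on\n" +
    		"overlay          59G   21G   35G  38% /var\n" // invented example row
    	lines := strings.Split(strings.TrimSpace(sample), "\n")
    	fields := strings.Fields(lines[1]) // NR==2 in awk terms
    	fmt.Println(fields[4])             // $5 in awk terms -> "38%"
    }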
	I0921 22:11:23.854921    8216 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:11:23.861421    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:24.047016    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:24.047016    8216 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:24.242440    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:24.436069    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:24.436416    8216 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:24.780187    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:25.017386    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:25.017386    8216 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:25.486769    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:25.694589    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	W0921 22:11:25.695099    8216 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:11:25.695180    8216 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:25.695180    8216 start.go:128] duration metric: createHost completed in 7.0426536s
	I0921 22:11:25.708174    8216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:11:25.714315    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:25.910731    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:25.910910    8216 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:26.120795    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:26.343720    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:26.343720    8216 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:26.653682    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:26.875612    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:26.875644    8216 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:27.553162    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:27.782516    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	W0921 22:11:27.782516    8216 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:11:27.782516    8216 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:27.794298    8216 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:11:27.800885    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:28.025596    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:28.025596    8216 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:28.220444    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:28.425568    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:28.425794    8216 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:28.758196    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:28.965837    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:28.965837    8216 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:29.583914    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:11:29.781211    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	W0921 22:11:29.781508    8216 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:11:29.781596    8216 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:29.781619    8216 fix.go:57] fixHost completed within 31.5997729s
	I0921 22:11:29.781619    8216 start.go:83] releasing machines lock for "old-k8s-version-20220921220934-5916", held for 31.6003913s
	W0921 22:11:29.781619    8216 start.go:602] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220921220934-5916 container: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220921220934-5916: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220921220934-5916': mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system
	W0921 22:11:29.782234    8216 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220921220934-5916 container: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220921220934-5916: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220921220934-5916': mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220921220934-5916 container: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220921220934-5916: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220921220934-5916': mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system
	
	I0921 22:11:29.782277    8216 start.go:617] Will try again in 5 seconds ...
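Both StartHost attempts die on the same underlying fault: the Docker daemon cannot create a directory under /var/lib/docker/volumes because that filesystem is currently read-only, so the machine's volume can never be created. A quick way to confirm the problem is independent of minikube is to create and remove a throwaway volume directly; the sketch below is a diagnostic aid only, and the probe volume name is an arbitrary placeholder.

    // volcheck.go: check whether "docker volume create" itself fails against a
    // read-only /var/lib/docker, independent of minikube. The volume name is an
    // arbitrary placeholder for this sketch.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	name := "rofs-probe"
    	if out, err := exec.Command("docker", "volume", "create", name).CombinedOutput(); err != nil {
    		// On the failing host this should reproduce the same
    		// "read-only file system" error recorded above.
    		fmt.Printf("volume create failed: %v\n%s", err, out)
    		return
    	}
    	// Creation worked; remove the probe volume again.
    	exec.Command("docker", "volume", "rm", name).Run()
    	fmt.Println("volume creation works; the read-only error was transient")
    }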
	I0921 22:11:34.793381    8216 start.go:364] acquiring machines lock for old-k8s-version-20220921220934-5916: {Name:mka5121945d619472d3cfcf71df0e13caeaa183b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:11:34.793381    8216 start.go:368] acquired machines lock for "old-k8s-version-20220921220934-5916" in 0s
	I0921 22:11:34.793381    8216 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:11:34.793381    8216 fix.go:55] fixHost starting: 
	I0921 22:11:34.811199    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:35.009119    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:35.009119    8216 fix.go:103] recreateIfNeeded on old-k8s-version-20220921220934-5916: state= err=unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:35.009119    8216 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:11:35.013957    8216 out.go:177] * docker "old-k8s-version-20220921220934-5916" container is missing, will recreate.
	I0921 22:11:35.018871    8216 delete.go:124] DEMOLISHING old-k8s-version-20220921220934-5916 ...
	I0921 22:11:35.030789    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:35.211209    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:11:35.211209    8216 stop.go:75] unable to get state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:35.211209    8216 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:35.225699    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:35.412224    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:35.412415    8216 delete.go:82] Unable to get host status for old-k8s-version-20220921220934-5916, assuming it has already been deleted: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:35.421898    8216 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220921220934-5916
	W0921 22:11:35.615229    8216 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:35.615229    8216 kic.go:356] could not find the container old-k8s-version-20220921220934-5916 to remove it. will try anyways
	I0921 22:11:35.622238    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:35.807833    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:11:35.807833    8216 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:35.813826    8216 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-20220921220934-5916 /bin/bash -c "sudo init 0"
	W0921 22:11:35.998823    8216 cli_runner.go:211] docker exec --privileged -t old-k8s-version-20220921220934-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:11:35.998823    8216 oci.go:646] error shutdown old-k8s-version-20220921220934-5916: docker exec --privileged -t old-k8s-version-20220921220934-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
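With "sudo init 0" impossible (there is no container to exec into), the demolition path falls back to polling docker container inspect --format={{.State.Status}} with growing delays, hoping to see the container reach the exited state, and eventually gives up "(might be okay)". The loop that follows in the log has roughly this shape; the sketch is illustrative only and is not the oci.go implementation.

    // waitexited.go: poll a container's .State.Status until it reports "exited"
    // or a deadline passes. Illustrative sketch of the pattern in the log, not
    // minikube's oci.go code; the fixed 1s delay is an assumption.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitExited(name string, deadline time.Duration) error {
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		out, err := exec.Command("docker", "container", "inspect",
    			name, "--format", "{{.State.Status}}").Output()
    		status := strings.TrimSpace(string(out))
    		if err == nil && status == "exited" {
    			return nil
    		}
    		// An inspect error here usually means "No such container", which is
    		// acceptable for a demolition path; log it and keep polling.
    		fmt.Printf("status %q (err %v), retrying\n", status, err)
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("%s never reached the exited state", name)
    }

    func main() {
    	fmt.Println(waitExited("old-k8s-version-20220921220934-5916", 20*time.Second))
    }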
	I0921 22:11:37.008194    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:37.213978    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:37.213978    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:37.213978    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:37.213978    8216 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:37.621792    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:37.814093    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:37.814093    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:37.814093    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:37.814093    8216 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:38.431558    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:38.641237    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:38.641237    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:38.641237    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:38.641237    8216 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:40.067489    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:40.277316    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:40.277408    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:40.277408    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:40.277473    8216 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:41.493754    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:41.686191    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:41.686191    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:41.686191    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:41.686191    8216 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:45.164675    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:45.357029    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:45.357214    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:45.357296    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:45.357296    8216 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:49.923860    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:50.119882    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:50.120209    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:50.120209    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:50.120209    8216 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:55.964693    8216 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:11:56.158453    8216 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:56.158518    8216 oci.go:658] temporary error verifying shutdown: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:11:56.158518    8216 oci.go:660] temporary error: container old-k8s-version-20220921220934-5916 status is  but expect it to be exited
	I0921 22:11:56.158518    8216 oci.go:88] couldn't shut down old-k8s-version-20220921220934-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	 
	I0921 22:11:56.166557    8216 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-20220921220934-5916
	I0921 22:11:56.375962    8216 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-20220921220934-5916
	W0921 22:11:56.568139    8216 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:56.575570    8216 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220921220934-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:11:56.770041    8216 cli_runner.go:211] docker network inspect old-k8s-version-20220921220934-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:11:56.776853    8216 network_create.go:272] running [docker network inspect old-k8s-version-20220921220934-5916] to gather additional debugging logs...
	I0921 22:11:56.777813    8216 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220921220934-5916
	W0921 22:11:56.956920    8216 cli_runner.go:211] docker network inspect old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:56.956978    8216 network_create.go:275] error running [docker network inspect old-k8s-version-20220921220934-5916]: docker network inspect old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220921220934-5916
	I0921 22:11:56.957005    8216 network_create.go:277] output of [docker network inspect old-k8s-version-20220921220934-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220921220934-5916
	
	** /stderr **
	W0921 22:11:56.957800    8216 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:11:56.957800    8216 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:11:57.966449    8216 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:11:57.970935    8216 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:11:57.971212    8216 start.go:159] libmachine.API.Create for "old-k8s-version-20220921220934-5916" (driver="docker")
	I0921 22:11:57.971212    8216 client.go:168] LocalClient.Create starting
	I0921 22:11:57.971808    8216 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:11:57.971808    8216 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:57.971808    8216 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:57.971808    8216 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:11:57.972335    8216 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:57.972421    8216 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:57.981157    8216 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220921220934-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:11:58.169200    8216 cli_runner.go:211] docker network inspect old-k8s-version-20220921220934-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:11:58.176391    8216 network_create.go:272] running [docker network inspect old-k8s-version-20220921220934-5916] to gather additional debugging logs...
	I0921 22:11:58.176391    8216 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220921220934-5916
	W0921 22:11:58.372470    8216 cli_runner.go:211] docker network inspect old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:11:58.372470    8216 network_create.go:275] error running [docker network inspect old-k8s-version-20220921220934-5916]: docker network inspect old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220921220934-5916
	I0921 22:11:58.372470    8216 network_create.go:277] output of [docker network inspect old-k8s-version-20220921220934-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220921220934-5916
	
	** /stderr **
	I0921 22:11:58.379759    8216 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:11:58.604732    8216 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006dc180] amended:false}} dirty:map[] misses:0}
	I0921 22:11:58.604732    8216 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:11:58.621734    8216 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006dc180] amended:true}} dirty:map[192.168.49.0:0xc0006dc180 192.168.58.0:0xc00000ada8] misses:0}
	I0921 22:11:58.621734    8216 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
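The two network.go lines above show the subnet picker at work: 192.168.49.0/24 is skipped because it still carries an unexpired reservation from the earlier attempt, so 192.168.58.0/24 is reserved for one minute and used instead. A minimal sketch of that selection idea follows; the candidate list and in-memory reservation map are assumptions for illustration, not minikube's exact algorithm or data structures.

    // pickfree.go: choose the first candidate /24 subnet that is not already
    // reserved. The candidate list is an assumption for illustration; the log
    // only shows 192.168.49.0/24 being skipped and 192.168.58.0/24 chosen.
    package main

    import "fmt"

    func pickFreeSubnet(candidates []string, reserved map[string]bool) (string, bool) {
    	for _, subnet := range candidates {
    		if reserved[subnet] {
    			fmt.Println("skipping reserved subnet", subnet)
    			continue
    		}
    		reserved[subnet] = true // reserve it so a concurrent caller skips it
    		return subnet, true
    	}
    	return "", false
    }

    func main() {
    	reserved := map[string]bool{"192.168.49.0/24": true}
    	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
    	if subnet, ok := pickFreeSubnet(candidates, reserved); ok {
    		fmt.Println("using free private subnet", subnet)
    	}
    }

Picking a new subnet does not help here either: the freshly chosen 192.168.58.0/24 immediately collides with yet another existing bridge network, as the next failure shows.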
	I0921 22:11:58.621734    8216 network_create.go:115] attempt to create docker network old-k8s-version-20220921220934-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:11:58.628726    8216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916
	W0921 22:11:58.837097    8216 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916 returned with exit code 1
	E0921 22:11:58.837097    8216 network_create.go:104] error while trying to create docker network old-k8s-version-20220921220934-5916 192.168.58.0/24: create docker network old-k8s-version-20220921220934-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 45a16cf1993898c999de5afd6702bee53fbdb595898f0d3f34bbce7b3f786894 (br-45a16cf19938): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:11:58.837097    8216 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220921220934-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 45a16cf1993898c999de5afd6702bee53fbdb595898f0d3f34bbce7b3f786894 (br-45a16cf19938): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network old-k8s-version-20220921220934-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 45a16cf1993898c999de5afd6702bee53fbdb595898f0d3f34bbce7b3f786894 (br-45a16cf19938): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:11:58.852095    8216 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:11:59.066473    8216 cli_runner.go:164] Run: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:11:59.258133    8216 cli_runner.go:211] docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:11:59.258133    8216 client.go:171] LocalClient.Create took 1.2869104s
	I0921 22:12:01.281120    8216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:12:01.288257    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:01.500593    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:12:01.500593    8216 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:01.684724    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:01.902577    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:12:01.902752    8216 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:02.327759    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:02.550910    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	W0921 22:12:02.550910    8216 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:12:02.550910    8216 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:02.561462    8216 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:12:02.570436    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:02.770009    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:12:02.770294    8216 retry.go:31] will retry after 144.863405ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:02.937921    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:03.142484    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:12:03.142969    8216 retry.go:31] will retry after 410.553224ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:03.577655    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:03.785636    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:12:03.785636    8216 retry.go:31] will retry after 314.505366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:04.111266    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:04.318579    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	W0921 22:12:04.318800    8216 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:12:04.318800    8216 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:04.318800    8216 start.go:128] duration metric: createHost completed in 6.3522202s
	I0921 22:12:04.331436    8216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:12:04.341450    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:04.549483    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:12:04.549698    8216 retry.go:31] will retry after 200.38067ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:04.762786    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:04.957520    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:12:04.957520    8216 retry.go:31] will retry after 252.474839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:05.231322    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:05.412003    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:12:05.412074    8216 retry.go:31] will retry after 585.618668ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:06.020317    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:06.228965    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	W0921 22:12:06.228965    8216 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:12:06.228965    8216 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:06.256444    8216 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:12:06.266898    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:06.475333    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:12:06.475735    8216 retry.go:31] will retry after 194.626905ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:06.686820    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:06.896448    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:12:06.896448    8216 retry.go:31] will retry after 346.182076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:07.263234    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:07.451406    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	I0921 22:12:07.457312    8216 retry.go:31] will retry after 579.704465ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:08.051952    8216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916
	W0921 22:12:08.262595    8216 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916 returned with exit code 1
	W0921 22:12:08.262595    8216 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:12:08.262595    8216 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20220921220934-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220934-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	I0921 22:12:08.262595    8216 fix.go:57] fixHost completed within 33.4689507s
	I0921 22:12:08.262595    8216 start.go:83] releasing machines lock for "old-k8s-version-20220921220934-5916", held for 33.4689507s
	W0921 22:12:08.263578    8216 out.go:239] * Failed to start docker container. Running "minikube delete -p old-k8s-version-20220921220934-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220921220934-5916 container: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220921220934-5916: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220921220934-5916': mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-20220921220934-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220921220934-5916 container: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220921220934-5916: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220921220934-5916': mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system
	
	I0921 22:12:08.268162    8216 out.go:177] 
	W0921 22:12:08.270746    8216 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220921220934-5916 container: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220921220934-5916: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220921220934-5916': mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for old-k8s-version-20220921220934-5916 container: docker volume create old-k8s-version-20220921220934-5916 --label name.minikube.sigs.k8s.io=old-k8s-version-20220921220934-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create old-k8s-version-20220921220934-5916: error while creating volume root path '/var/lib/docker/volumes/old-k8s-version-20220921220934-5916': mkdir /var/lib/docker/volumes/old-k8s-version-20220921220934-5916: read-only file system
	
	W0921 22:12:08.270989    8216 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:12:08.271022    8216 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:12:08.273880    8216 out.go:177] 

                                                
                                                
** /stderr **
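The stderr transcript above shows the cli_runner/retry loop probing "docker container inspect" for the 22/tcp host port with a delay that grows on each attempt (roughly 195ms, 346ms, 580ms) until fixHost gives up because the container never existed. Below is a minimal, self-contained Go sketch of that retry-with-backoff pattern, assuming only the docker CLI on PATH; it is an illustration of the behaviour seen in the log, not minikube's actual retry.go, and the container name is simply copied from the failed profile above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// inspectSSHPort asks Docker for the host port mapped to 22/tcp of the named
// container; it keeps failing while the container does not exist, exactly as
// in the transcript above.
func inspectSSHPort(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	name := "old-k8s-version-20220921220934-5916" // profile name taken from the log above
	delay := 200 * time.Millisecond               // grows on every attempt, like the retry.go lines above
	for attempt := 1; attempt <= 4; attempt++ {
		if port, err := inspectSSHPort(name); err == nil {
			fmt.Println("ssh host port:", port)
			return
		} else {
			fmt.Printf("attempt %d failed, will retry after %v: %v\n", attempt, delay, err)
		}
		time.Sleep(delay)
		delay *= 2 // roughly matches the increasing intervals seen in the transcript
	}
	fmt.Println("giving up: container never appeared")
}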
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-20220921220934-5916 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220921220934-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220921220934-5916: exit status 1 (270.118ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916: exit status 7 (586.7415ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:09.334632    6256 status.go:247] status error: host: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220921220934-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (77.57s)
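For reference, the PR_DOCKER_READONLY_VOL exit in the run above comes down to "docker volume create" failing because /var/lib/docker/volumes inside the Docker Desktop VM was read-only at that moment. The following is a small Go sketch, not minikube code, that re-runs the same volume creation and checks stderr for that message, printing the remediation the log itself suggests (restart Docker, or "minikube delete -p <profile>"); the volume name is reused from the failed profile purely for illustration.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Profile/volume name taken from the failed test above.
	name := "old-k8s-version-20220921220934-5916"
	cmd := exec.Command("docker", "volume", "create", name,
		"--label", "name.minikube.sigs.k8s.io="+name,
		"--label", "created_by.minikube.sigs.k8s.io=true")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		if strings.Contains(stderr.String(), "read-only file system") {
			// Same condition the run above reports as PR_DOCKER_READONLY_VOL.
			fmt.Println("Docker's volume root is read-only; restart Docker and retry,")
			fmt.Println("or run: minikube delete -p " + name)
			return
		}
		fmt.Println("volume create failed:", strings.TrimSpace(stderr.String()))
		return
	}
	fmt.Println("volume created successfully; cleaning it up")
	_ = exec.Command("docker", "volume", "rm", name).Run()
}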

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (78.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220921220937-5916 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.2

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-20220921220937-5916 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.2: exit status 60 (1m17.280932s)

                                                
                                                
-- stdout --
	* [no-preload-20220921220937-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-20220921220937-5916 in cluster no-preload-20220921220937-5916
	* Pulling base image ...
	* Another minikube instance is downloading dependencies... 
	* docker "no-preload-20220921220937-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-20220921220937-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:10:53.870174    2556 out.go:296] Setting OutFile to fd 1676 ...
	I0921 22:10:53.933915    2556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:10:53.933915    2556 out.go:309] Setting ErrFile to fd 1904...
	I0921 22:10:53.933915    2556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:10:53.958903    2556 out.go:303] Setting JSON to false
	I0921 22:10:53.960903    2556 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4322,"bootTime":1663793931,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:10:53.960903    2556 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:10:53.964965    2556 out.go:177] * [no-preload-20220921220937-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:10:53.967898    2556 notify.go:214] Checking for updates...
	I0921 22:10:53.969911    2556 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:10:53.973911    2556 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:10:53.976900    2556 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:10:53.978905    2556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:10:53.981898    2556 config.go:180] Loaded profile config "no-preload-20220921220937-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:10:53.982897    2556 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:10:54.257747    2556 docker.go:137] docker version: linux-20.10.17
	I0921 22:10:54.266767    2556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:10:54.801010    2556 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:82 SystemTime:2022-09-21 22:10:54.4298574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:10:54.804693    2556 out.go:177] * Using the docker driver based on existing profile
	I0921 22:10:54.807489    2556 start.go:284] selected driver: docker
	I0921 22:10:54.807489    2556 start.go:808] validating driver "docker" against &{Name:no-preload-20220921220937-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220937-5916 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:10:54.808200    2556 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:10:54.868869    2556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:10:55.410798    2556 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:82 SystemTime:2022-09-21 22:10:55.0150149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:10:55.410987    2556 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:10:55.410987    2556 cni.go:95] Creating CNI manager for ""
	I0921 22:10:55.410987    2556 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:10:55.410987    2556 start_flags.go:316] config:
	{Name:no-preload-20220921220937-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220937-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:10:55.418756    2556 out.go:177] * Starting control plane node no-preload-20220921220937-5916 in cluster no-preload-20220921220937-5916
	I0921 22:10:55.422420    2556 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:10:55.424420    2556 out.go:177] * Pulling base image ...
	I0921 22:10:55.428425    2556 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:10:55.428425    2556 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:10:55.428772    2556 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20220921220937-5916\config.json ...
	I0921 22:10:55.428772    2556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0921 22:10:55.428772    2556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.5.4-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.4-0
	I0921 22:10:55.428772    2556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.8 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.8
	I0921 22:10:55.428772    2556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.25.2
	I0921 22:10:55.428772    2556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.25.2
	I0921 22:10:55.428772    2556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.25.2
	I0921 22:10:55.428772    2556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.25.2
	I0921 22:10:55.428772    2556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.9.3 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.9.3
	I0921 22:10:55.589895    2556 cache.go:107] acquiring lock: {Name:mk93ccdec90972c05247bea23df9b97c54ef0291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:10:55.589895    2556 cache.go:107] acquiring lock: {Name:mk23bc57c381d093082940a5c180cc32b71f6590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:10:55.590334    2556 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0921 22:10:55.590334    2556 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.4-0 exists
	I0921 22:10:55.590334    2556 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 161.561ms
	I0921 22:10:55.590334    2556 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0921 22:10:55.590334    2556 cache.go:96] cache image "registry.k8s.io/etcd:3.5.4-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.5.4-0" took 161.561ms
	I0921 22:10:55.590334    2556 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.4-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.4-0 succeeded
	I0921 22:10:55.593111    2556 cache.go:107] acquiring lock: {Name:mk42e25c67b04a7be621dff66042769c9efcef51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:10:55.593111    2556 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.25.2 exists
	I0921 22:10:55.593111    2556 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.25.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.25.2" took 164.3379ms
	I0921 22:10:55.593111    2556 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.25.2 succeeded
	I0921 22:10:55.595124    2556 cache.go:107] acquiring lock: {Name:mkb326da2140b0ae2e00a2988d7409604e21ee2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:10:55.595124    2556 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.25.2 exists
	I0921 22:10:55.595124    2556 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.25.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.25.2" took 165.7728ms
	I0921 22:10:55.595124    2556 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.25.2 succeeded
	I0921 22:10:55.600551    2556 cache.go:107] acquiring lock: {Name:mk0addad2b04152bfd63161db235c11568b39fe8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:10:55.600551    2556 cache.go:107] acquiring lock: {Name:mk8be8007302f2b8b3da1dd98caf592762225a91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:10:55.600551    2556 cache.go:107] acquiring lock: {Name:mkab3ed6e795d07d8ef34d153242f0555bd2990e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:10:55.600551    2556 cache.go:107] acquiring lock: {Name:mk0ca2aa3958827f29fbc172907397ae8c50da6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:10:55.600551    2556 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.8 exists
	I0921 22:10:55.600551    2556 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.25.2 exists
	I0921 22:10:55.600551    2556 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.9.3 exists
	I0921 22:10:55.600551    2556 cache.go:96] cache image "registry.k8s.io/pause:3.8" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.8" took 171.7771ms
	I0921 22:10:55.600551    2556 cache.go:80] save to tar file registry.k8s.io/pause:3.8 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.8 succeeded
	I0921 22:10:55.600551    2556 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.25.2 exists
	I0921 22:10:55.600551    2556 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.25.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.25.2" took 171.7771ms
	I0921 22:10:55.600551    2556 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.9.3" took 171.1989ms
	I0921 22:10:55.601091    2556 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.9.3 succeeded
	I0921 22:10:55.600551    2556 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.25.2 succeeded
	I0921 22:10:55.602719    2556 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.25.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.25.2" took 173.9454ms
	I0921 22:10:55.602719    2556 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.25.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.25.2 succeeded
	I0921 22:10:55.602798    2556 cache.go:87] Successfully saved all images to host disk.
	I0921 22:10:55.680084    2556 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:10:55.680084    2556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:10:55.680084    2556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:10:55.680084    2556 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:10:55.680084    2556 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:10:55.680084    2556 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:10:55.680084    2556 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:10:55.680084    2556 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:10:55.681075    2556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:10:57.952612    2556 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:10:57.952612    2556 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:10:57.952612    2556 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:10:58.059647    2556 out.go:204] * Another minikube instance is downloading dependencies... 
	I0921 22:10:58.180983    2556 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:10:58.410662    2556 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [_______________________] ?% ? p/s 1.2s
	I0921 22:11:00.484621    2556 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:11:00.484736    2556 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:11:00.484787    2556 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:11:00.484839    2556 start.go:364] acquiring machines lock for no-preload-20220921220937-5916: {Name:mk5ebebabfef01f6dc67af3c2b2ec3d91e957a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:11:00.485030    2556 start.go:368] acquired machines lock for "no-preload-20220921220937-5916" in 190.5µs
	I0921 22:11:00.485275    2556 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:11:00.485275    2556 fix.go:55] fixHost starting: 
	I0921 22:11:00.506152    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:00.700688    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:00.700688    2556 fix.go:103] recreateIfNeeded on no-preload-20220921220937-5916: state= err=unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:00.700688    2556 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:11:00.703484    2556 out.go:177] * docker "no-preload-20220921220937-5916" container is missing, will recreate.
	I0921 22:11:00.706725    2556 delete.go:124] DEMOLISHING no-preload-20220921220937-5916 ...
	I0921 22:11:00.723120    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:00.934139    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:11:00.934139    2556 stop.go:75] unable to get state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:00.934139    2556 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:00.952039    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:01.166987    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:01.166987    2556 delete.go:82] Unable to get host status for no-preload-20220921220937-5916, assuming it has already been deleted: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:01.175763    2556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220921220937-5916
	W0921 22:11:01.372429    2556 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:01.372529    2556 kic.go:356] could not find the container no-preload-20220921220937-5916 to remove it. will try anyways
	I0921 22:11:01.381306    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:01.606971    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:11:01.607078    2556 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:01.614237    2556 cli_runner.go:164] Run: docker exec --privileged -t no-preload-20220921220937-5916 /bin/bash -c "sudo init 0"
	W0921 22:11:01.816476    2556 cli_runner.go:211] docker exec --privileged -t no-preload-20220921220937-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:11:01.816476    2556 oci.go:646] error shutdown no-preload-20220921220937-5916: docker exec --privileged -t no-preload-20220921220937-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:02.839305    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:03.061881    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:03.061959    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:03.061959    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:03.061959    2556 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:03.635194    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:03.849356    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:03.849611    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:03.849611    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:03.849683    2556 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:04.943500    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:05.151248    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:05.151369    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:05.151369    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:05.151423    2556 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:06.473046    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:06.698855    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:06.699115    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:06.699160    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:06.699198    2556 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:08.297769    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:08.491485    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:08.491585    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:08.491585    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:08.491806    2556 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:10.849409    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:11.061941    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:11.061941    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:11.061941    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:11.061941    2556 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:15.591367    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:15.831047    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:15.831047    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:15.831047    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:15.831047    2556 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:19.066952    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:19.248529    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:19.248658    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:19.248702    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:19.248735    2556 oci.go:88] couldn't shut down no-preload-20220921220937-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	 
	I0921 22:11:19.257749    2556 cli_runner.go:164] Run: docker rm -f -v no-preload-20220921220937-5916
	I0921 22:11:19.472791    2556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220921220937-5916
	W0921 22:11:19.665193    2556 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:19.671546    2556 cli_runner.go:164] Run: docker network inspect no-preload-20220921220937-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:11:19.853417    2556 cli_runner.go:211] docker network inspect no-preload-20220921220937-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:11:19.863286    2556 network_create.go:272] running [docker network inspect no-preload-20220921220937-5916] to gather additional debugging logs...
	I0921 22:11:19.863286    2556 cli_runner.go:164] Run: docker network inspect no-preload-20220921220937-5916
	W0921 22:11:20.072487    2556 cli_runner.go:211] docker network inspect no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:20.072487    2556 network_create.go:275] error running [docker network inspect no-preload-20220921220937-5916]: docker network inspect no-preload-20220921220937-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220921220937-5916
	I0921 22:11:20.072487    2556 network_create.go:277] output of [docker network inspect no-preload-20220921220937-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220921220937-5916
	
	** /stderr **
	W0921 22:11:20.073186    2556 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:11:20.073734    2556 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:11:21.074092    2556 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:11:21.079123    2556 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:11:21.079799    2556 start.go:159] libmachine.API.Create for "no-preload-20220921220937-5916" (driver="docker")
	I0921 22:11:21.079799    2556 client.go:168] LocalClient.Create starting
	I0921 22:11:21.080463    2556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:11:21.080989    2556 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:21.080989    2556 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:21.081155    2556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:11:21.081155    2556 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:21.081155    2556 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:21.090341    2556 cli_runner.go:164] Run: docker network inspect no-preload-20220921220937-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:11:21.275605    2556 cli_runner.go:211] docker network inspect no-preload-20220921220937-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:11:21.285041    2556 network_create.go:272] running [docker network inspect no-preload-20220921220937-5916] to gather additional debugging logs...
	I0921 22:11:21.285041    2556 cli_runner.go:164] Run: docker network inspect no-preload-20220921220937-5916
	W0921 22:11:21.479187    2556 cli_runner.go:211] docker network inspect no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:21.479237    2556 network_create.go:275] error running [docker network inspect no-preload-20220921220937-5916]: docker network inspect no-preload-20220921220937-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220921220937-5916
	I0921 22:11:21.479237    2556 network_create.go:277] output of [docker network inspect no-preload-20220921220937-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220921220937-5916
	
	** /stderr **
	I0921 22:11:21.486539    2556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:11:21.685699    2556 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014ae80] misses:0}
	I0921 22:11:21.686330    2556 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:11:21.686330    2556 network_create.go:115] attempt to create docker network no-preload-20220921220937-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:11:21.693367    2556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916
	W0921 22:11:21.883064    2556 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916 returned with exit code 1
	E0921 22:11:21.883064    2556 network_create.go:104] error while trying to create docker network no-preload-20220921220937-5916 192.168.49.0/24: create docker network no-preload-20220921220937-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0059e9371eccf472f8717a6a6ddb74a9df8243703e64c749643bc556251c1749 (br-0059e9371ecc): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:11:21.883064    2556 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220921220937-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0059e9371eccf472f8717a6a6ddb74a9df8243703e64c749643bc556251c1749 (br-0059e9371ecc): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220921220937-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 0059e9371eccf472f8717a6a6ddb74a9df8243703e64c749643bc556251c1749 (br-0059e9371ecc): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:11:21.896070    2556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
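The network create above fails because an existing bridge network already claims a range that overlaps 192.168.49.0/24. A hedged diagnostic sketch that walks every docker network and prints its subnets to find the conflicting bridge (an aid for reading this failure, not part of the test harness):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Print each docker network next to the subnets it owns; the bridge whose
// range overlaps 192.168.49.0/24 is the one the daemon is complaining about.
func main() {
	names, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		fmt.Println("docker network ls:", err)
		return
	}
	for _, name := range strings.Fields(string(names)) {
		subnet, err := exec.Command("docker", "network", "inspect", name,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			fmt.Printf("%-30s inspect failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%-30s %s\n", name, strings.TrimSpace(string(subnet)))
	}
}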
	I0921 22:11:22.108077    2556 cli_runner.go:164] Run: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:11:22.316673    2556 cli_runner.go:211] docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:11:22.316673    2556 client.go:171] LocalClient.Create took 1.2368644s
	I0921 22:11:24.341294    2556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:11:24.347014    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:24.543361    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:24.543672    2556 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:24.716103    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:24.924615    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:24.924741    2556 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:25.236215    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:25.429891    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:25.430112    2556 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:26.012489    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:26.219459    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	W0921 22:11:26.219504    2556 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:11:26.219504    2556 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
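The repeated "will retry after ..." lines come from a retry wrapper around the port-22 lookup; the container was never created, so every attempt fails identically until the caller gives up. A simplified Go sketch of that pattern (a generic backoff loop, not minikube's retry.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// hostPort22 mirrors the docker container inspect call in the log: it asks
// for the host port mapped to 22/tcp and fails while the container is absent.
func hostPort22(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return string(out), nil
}

// retry keeps calling fn with a growing delay and gives up after maxAttempts,
// which is roughly the behaviour the repeated log lines above reflect.
func retry(maxAttempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	err := retry(4, 150*time.Millisecond, func() error {
		_, err := hostPort22("no-preload-20220921220937-5916")
		return err
	})
	fmt.Println("final result:", err)
}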
	I0921 22:11:26.231999    2556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:11:26.240427    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:26.452506    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:26.452546    2556 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:26.654115    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:26.860286    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:26.860638    2556 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:27.204892    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:27.415967    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:27.416039    2556 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:27.898712    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:28.087509    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	W0921 22:11:28.087836    2556 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:11:28.087836    2556 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:28.087836    2556 start.go:128] duration metric: createHost completed in 7.0135012s
	I0921 22:11:28.099386    2556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:11:28.105717    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:28.302647    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:28.302820    2556 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:28.512364    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:28.703675    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:28.703818    2556 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:29.020508    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:29.249562    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:29.249562    2556 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:29.929331    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:30.128801    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	W0921 22:11:30.129055    2556 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:11:30.129055    2556 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:30.138837    2556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:11:30.144979    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:30.362220    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:30.362220    2556 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:30.563526    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:30.755250    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:30.755250    2556 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:31.090927    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:31.314668    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:31.314717    2556 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:31.925583    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:11:32.105114    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	W0921 22:11:32.105322    2556 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:11:32.105322    2556 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:32.105322    2556 fix.go:57] fixHost completed within 31.6198016s
	I0921 22:11:32.105322    2556 start.go:83] releasing machines lock for "no-preload-20220921220937-5916", held for 31.6199196s
	W0921 22:11:32.105322    2556 start.go:602] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220921220937-5916 container: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220921220937-5916: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220921220937-5916': mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system
	W0921 22:11:32.106009    2556 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220921220937-5916 container: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220921220937-5916: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220921220937-5916': mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220921220937-5916 container: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220921220937-5916: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220921220937-5916': mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system
	
	I0921 22:11:32.106044    2556 start.go:617] Will try again in 5 seconds ...
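The root cause surfaced in the StartHost failure above is that the daemon cannot create the volume directory under /var/lib/docker/volumes because that path sits on a read-only filesystem. A throwaway volume create is enough to confirm whether the daemon's volume root is writable at all (illustrative sketch; the probe volume name is invented):

package main

import (
	"fmt"
	"os/exec"
)

// probeVolumeRoot creates and immediately removes a scratch volume. If the
// daemon's volume root is mounted read-only, the create fails with the same
// "read-only file system" error seen in the log above.
func probeVolumeRoot() error {
	const probe = "minikube-volume-probe" // hypothetical name, used only for this check
	out, err := exec.Command("docker", "volume", "create", probe).CombinedOutput()
	if err != nil {
		return fmt.Errorf("volume create failed: %w\n%s", err, out)
	}
	// Best-effort cleanup; ignore the error if removal fails.
	_ = exec.Command("docker", "volume", "rm", probe).Run()
	return nil
}

func main() {
	if err := probeVolumeRoot(); err != nil {
		fmt.Println("docker volume root looks unwritable:", err)
		return
	}
	fmt.Println("docker volume root is writable")
}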
	I0921 22:11:37.106636    2556 start.go:364] acquiring machines lock for no-preload-20220921220937-5916: {Name:mk5ebebabfef01f6dc67af3c2b2ec3d91e957a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:11:37.106636    2556 start.go:368] acquired machines lock for "no-preload-20220921220937-5916" in 0s
	I0921 22:11:37.106636    2556 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:11:37.107268    2556 fix.go:55] fixHost starting: 
	I0921 22:11:37.123725    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:37.332209    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:37.332298    2556 fix.go:103] recreateIfNeeded on no-preload-20220921220937-5916: state= err=unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:37.332414    2556 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:11:37.336671    2556 out.go:177] * docker "no-preload-20220921220937-5916" container is missing, will recreate.
	I0921 22:11:37.339537    2556 delete.go:124] DEMOLISHING no-preload-20220921220937-5916 ...
	I0921 22:11:37.354038    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:37.549739    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:11:37.549739    2556 stop.go:75] unable to get state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:37.549739    2556 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:37.572892    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:37.783992    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:37.784109    2556 delete.go:82] Unable to get host status for no-preload-20220921220937-5916, assuming it has already been deleted: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:37.794491    2556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220921220937-5916
	W0921 22:11:38.021235    2556 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:38.021347    2556 kic.go:356] could not find the container no-preload-20220921220937-5916 to remove it. will try anyways
	I0921 22:11:38.030021    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:38.222713    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:11:38.222866    2556 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:38.231373    2556 cli_runner.go:164] Run: docker exec --privileged -t no-preload-20220921220937-5916 /bin/bash -c "sudo init 0"
	W0921 22:11:38.408884    2556 cli_runner.go:211] docker exec --privileged -t no-preload-20220921220937-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:11:38.408990    2556 oci.go:646] error shutdown no-preload-20220921220937-5916: docker exec --privileged -t no-preload-20220921220937-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
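The DEMOLISHING path attempts a graceful shutdown with docker exec ... "sudo init 0" and then polls --format={{.State.Status}} until the container reports "exited"; since the container never existed, both steps keep failing with "No such container". A minimal sketch of that wait loop (an illustration of the pattern in the log, not the oci.go implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerStatus wraps the inspect call from the log; it returns an error
// while the container does not exist.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("unknown state %q: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

// waitForExited polls until the container reports "exited" or the deadline
// passes, mirroring the "verify shutdown" retries in the log above.
func waitForExited(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		status, err := containerStatus(name)
		if err == nil && status == "exited" {
			return nil
		}
		fmt.Printf("container %s status is %q, want exited (%v)\n", name, status, err)
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %s did not reach exited state within %v", name, timeout)
}

func main() {
	fmt.Println(waitForExited("no-preload-20220921220937-5916", 3*time.Second))
}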
	I0921 22:11:39.438416    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:39.687668    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:39.687668    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:39.687668    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:39.687668    2556 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:40.098284    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:40.309023    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:40.309087    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:40.309087    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:40.309087    2556 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:40.917208    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:41.126537    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:41.126537    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:41.126537    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:41.126537    2556 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:42.546166    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:42.753479    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:42.753479    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:42.753479    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:42.753479    2556 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:43.960700    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:44.155648    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:44.155854    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:44.155908    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:44.155908    2556 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:47.626788    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:47.834863    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:47.835209    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:47.835264    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:47.835342    2556 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:52.392110    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:52.610727    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:52.610727    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:52.610727    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:52.610727    2556 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:58.458895    2556 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:11:58.681284    2556 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:58.681284    2556 oci.go:658] temporary error verifying shutdown: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:11:58.681284    2556 oci.go:660] temporary error: container no-preload-20220921220937-5916 status is  but expect it to be exited
	I0921 22:11:58.681284    2556 oci.go:88] couldn't shut down no-preload-20220921220937-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	 
	I0921 22:11:58.688552    2556 cli_runner.go:164] Run: docker rm -f -v no-preload-20220921220937-5916
	I0921 22:11:58.892212    2556 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-20220921220937-5916
	W0921 22:11:59.118057    2556 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:59.127050    2556 cli_runner.go:164] Run: docker network inspect no-preload-20220921220937-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:11:59.335509    2556 cli_runner.go:211] docker network inspect no-preload-20220921220937-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:11:59.343606    2556 network_create.go:272] running [docker network inspect no-preload-20220921220937-5916] to gather additional debugging logs...
	I0921 22:11:59.343606    2556 cli_runner.go:164] Run: docker network inspect no-preload-20220921220937-5916
	W0921 22:11:59.588434    2556 cli_runner.go:211] docker network inspect no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:11:59.588591    2556 network_create.go:275] error running [docker network inspect no-preload-20220921220937-5916]: docker network inspect no-preload-20220921220937-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220921220937-5916
	I0921 22:11:59.588617    2556 network_create.go:277] output of [docker network inspect no-preload-20220921220937-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220921220937-5916
	
	** /stderr **
	W0921 22:11:59.589690    2556 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:11:59.589690    2556 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:12:00.603853    2556 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:12:00.608917    2556 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:12:00.609248    2556 start.go:159] libmachine.API.Create for "no-preload-20220921220937-5916" (driver="docker")
	I0921 22:12:00.609340    2556 client.go:168] LocalClient.Create starting
	I0921 22:12:00.609643    2556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:12:00.610324    2556 main.go:134] libmachine: Decoding PEM data...
	I0921 22:12:00.610324    2556 main.go:134] libmachine: Parsing certificate...
	I0921 22:12:00.610324    2556 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:12:00.610904    2556 main.go:134] libmachine: Decoding PEM data...
	I0921 22:12:00.610904    2556 main.go:134] libmachine: Parsing certificate...
	I0921 22:12:00.619192    2556 cli_runner.go:164] Run: docker network inspect no-preload-20220921220937-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:12:00.820882    2556 cli_runner.go:211] docker network inspect no-preload-20220921220937-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:12:00.828691    2556 network_create.go:272] running [docker network inspect no-preload-20220921220937-5916] to gather additional debugging logs...
	I0921 22:12:00.828691    2556 cli_runner.go:164] Run: docker network inspect no-preload-20220921220937-5916
	W0921 22:12:01.038547    2556 cli_runner.go:211] docker network inspect no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:12:01.038786    2556 network_create.go:275] error running [docker network inspect no-preload-20220921220937-5916]: docker network inspect no-preload-20220921220937-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220921220937-5916
	I0921 22:12:01.038973    2556 network_create.go:277] output of [docker network inspect no-preload-20220921220937-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220921220937-5916
	
	** /stderr **
	I0921 22:12:01.046618    2556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:12:01.271311    2556 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014ae80] amended:false}} dirty:map[] misses:0}
	I0921 22:12:01.271311    2556 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:12:01.287580    2556 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014ae80] amended:true}} dirty:map[192.168.49.0:0xc00014ae80 192.168.58.0:0xc0011b8280] misses:0}
	I0921 22:12:01.287580    2556 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
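On this second attempt the 192.168.49.0/24 reservation from the first try is still live, so the subnet picker skips it and settles on 192.168.58.0/24. A simplified sketch of that selection step (the candidate list and reservation map below are assumptions for illustration, not minikube's internals):

package main

import "fmt"

// pickSubnet returns the first candidate /24 that is not already reserved,
// which is the behaviour the "skipping subnet ... reserving subnet ..." log
// lines above describe.
func pickSubnet(candidates []string, reserved map[string]bool) (string, bool) {
	for _, cidr := range candidates {
		if reserved[cidr] {
			fmt.Println("skipping reserved subnet", cidr)
			continue
		}
		reserved[cidr] = true // reserve it for this creation attempt
		return cidr, true
	}
	return "", false
}

func main() {
	// Candidate order here is illustrative; minikube walks its own list of
	// private ranges (192.168.49.0/24, 192.168.58.0/24, ...).
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
	reserved := map[string]bool{"192.168.49.0/24": true}
	if cidr, ok := pickSubnet(candidates, reserved); ok {
		fmt.Println("using free private subnet", cidr)
	}
}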
	I0921 22:12:01.287580    2556 network_create.go:115] attempt to create docker network no-preload-20220921220937-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:12:01.293904    2556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916
	W0921 22:12:01.500593    2556 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916 returned with exit code 1
	E0921 22:12:01.500593    2556 network_create.go:104] error while trying to create docker network no-preload-20220921220937-5916 192.168.58.0/24: create docker network no-preload-20220921220937-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4260092bf1ff73808ff538b48bdb871c3999ebe8227fd6c04e25d66cfa2248b7 (br-4260092bf1ff): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:12:01.500593    2556 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220921220937-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4260092bf1ff73808ff538b48bdb871c3999ebe8227fd6c04e25d66cfa2248b7 (br-4260092bf1ff): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network no-preload-20220921220937-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 no-preload-20220921220937-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 4260092bf1ff73808ff538b48bdb871c3999ebe8227fd6c04e25d66cfa2248b7 (br-4260092bf1ff): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:12:01.517595    2556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:12:01.757326    2556 cli_runner.go:164] Run: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:12:01.949577    2556 cli_runner.go:211] docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:12:01.949920    2556 client.go:171] LocalClient.Create took 1.3404412s
	I0921 22:12:03.961745    2556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:12:03.968751    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:04.162457    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:12:04.162457    2556 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:04.341450    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:04.564513    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:12:04.564869    2556 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:04.996555    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:05.178350    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	W0921 22:12:05.178477    2556 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:12:05.178477    2556 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:05.191472    2556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:12:05.201929    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:05.381369    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:12:05.381369    2556 retry.go:31] will retry after 144.863405ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:05.545389    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:05.721456    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:12:05.721662    2556 retry.go:31] will retry after 410.553224ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:06.147804    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:06.382733    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:12:06.382921    2556 retry.go:31] will retry after 314.505366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:06.716243    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:06.912331    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	W0921 22:12:06.912454    2556 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:12:06.912454    2556 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:06.912454    2556 start.go:128] duration metric: createHost completed in 6.3083722s
	I0921 22:12:06.923676    2556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:12:06.930836    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:07.114848    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:12:07.114848    2556 retry.go:31] will retry after 200.38067ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:07.327245    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:07.544076    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:12:07.544234    2556 retry.go:31] will retry after 252.474839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:07.811905    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:07.999369    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:12:07.999369    2556 retry.go:31] will retry after 585.618668ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:08.595460    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:08.795086    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	W0921 22:12:08.795086    2556 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:12:08.795086    2556 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:08.807082    2556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:12:08.814079    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:09.024541    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:12:09.024541    2556 retry.go:31] will retry after 194.626905ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:09.233249    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:09.457611    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:12:09.457611    2556 retry.go:31] will retry after 346.182076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:09.825732    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:10.033499    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	I0921 22:12:10.033499    2556 retry.go:31] will retry after 579.704465ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:10.636076    2556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916
	W0921 22:12:10.837720    2556 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916 returned with exit code 1
	W0921 22:12:10.837854    2556 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:12:10.837854    2556 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20220921220937-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220937-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	I0921 22:12:10.837854    2556 fix.go:57] fixHost completed within 33.730952s
	I0921 22:12:10.837854    2556 start.go:83] releasing machines lock for "no-preload-20220921220937-5916", held for 33.730952s
	W0921 22:12:10.837854    2556 out.go:239] * Failed to start docker container. Running "minikube delete -p no-preload-20220921220937-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220921220937-5916 container: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220921220937-5916: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220921220937-5916': mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p no-preload-20220921220937-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220921220937-5916 container: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220921220937-5916: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220921220937-5916': mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system
	
	I0921 22:12:10.843834    2556 out.go:177] 
	W0921 22:12:10.846850    2556 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220921220937-5916 container: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220921220937-5916: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220921220937-5916': mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for no-preload-20220921220937-5916 container: docker volume create no-preload-20220921220937-5916 --label name.minikube.sigs.k8s.io=no-preload-20220921220937-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create no-preload-20220921220937-5916: error while creating volume root path '/var/lib/docker/volumes/no-preload-20220921220937-5916': mkdir /var/lib/docker/volumes/no-preload-20220921220937-5916: read-only file system
	
	W0921 22:12:10.846850    2556 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:12:10.846850    2556 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:12:10.849847    2556 out.go:177] 

** /stderr **
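For context on the failure above: the fatal error is the Docker daemon refusing "docker volume create" because /var/lib/docker is mounted read-only, which is independent of minikube. Below is a minimal Go sketch (illustrative only, not minikube code; it assumes the docker CLI is on PATH) that re-issues the exact volume-create call from the log so the daemon-side error can be confirmed or ruled out, for example after restarting Docker Desktop.

// volcheck.go: re-run the "docker volume create" call that failed above and
// print whatever the daemon answers. Illustrative sketch only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "no-preload-20220921220937-5916" // volume name copied from the failing command above
	create := exec.Command("docker", "volume", "create", name,
		"--label", "name.minikube.sigs.k8s.io="+name,
		"--label", "created_by.minikube.sigs.k8s.io=true")
	out, err := create.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// If the daemon's /var/lib/docker is still read-only, the same
		// "read-only file system" message should appear here.
		fmt.Println("volume create failed:", err)
		return
	}
	// The daemon accepted the volume, so remove it again.
	_ = exec.Command("docker", "volume", "rm", name).Run()
	fmt.Println("volume create succeeded; the read-only condition has cleared")
}

Either outcome narrows the problem to the Docker Desktop/WSL2 filesystem state rather than to the minikube profile itself.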
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p no-preload-20220921220937-5916 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.2": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220937-5916

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220921220937-5916: exit status 1 (255.7163ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220921220937-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916: exit status 7 (577.7078ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:12:11.942498    9144 status.go:247] status error: host: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220921220937-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (78.37s)
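As a side note on the retry lines in the log above: they poll "docker container inspect" for the host port published for 22/tcp, waiting a little longer after each failed attempt. A rough Go sketch of that poll-and-back-off shape (illustrative only, not minikube's actual retry.go; the container name is copied from the log and the delays are arbitrary):

// portpoll.go: repeatedly ask Docker for the host port mapped to 22/tcp,
// with an increasing delay between attempts. Illustrative sketch only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func hostPort22(container string) (string, error) {
	// Same Go template the log uses to read the published port for 22/tcp.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	container := "no-preload-20220921220937-5916" // name taken from the log above
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		port, err := hostPort22(container)
		if err == nil {
			fmt.Println("ssh host port:", port)
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // grow the delay, roughly like the increasing intervals in the log
	}
	fmt.Println("container never became inspectable; it was likely never created")
}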

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (2.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916: exit status 7 (552.9798ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:11:02.151113    7932 status.go:247] status error: host: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220921220947-5916 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220947-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220921220947-5916: exit status 1 (283.3033ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220921220947-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916: exit status 7 (561.4881ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:11:03.608712    1240 status.go:247] status error: host: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220921220947-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (2.02s)
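The "expected post-stop host status to be Stopped but got Nonexistent" message earlier in this test comes down to whether the profile's container still exists at all. A small illustrative Go probe (not minikube code) using the same --format template as the status check in the log above:

// statecheck.go: run the same "docker container inspect --format={{.State.Status}}"
// command as the status check above and report whether the profile's container is
// stopped ("exited"), running, or missing entirely -- the missing case is what the
// test above reports as "Nonexistent". Illustrative sketch only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "embed-certs-20220921220947-5916" // profile/container name from the log above
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	state := strings.TrimSpace(string(out))
	switch {
	case err != nil && strings.Contains(state, "No such container"):
		fmt.Println("Nonexistent: the container was deleted, not just stopped")
	case err != nil:
		fmt.Println("unknown state:", state)
	case state == "exited":
		fmt.Println("Stopped: container exists but is not running")
	default:
		fmt.Println("docker reports state:", state)
	}
}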

TestStartStop/group/embed-certs/serial/SecondStart (77.78s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220921220947-5916 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.2

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p embed-certs-20220921220947-5916 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.2: exit status 60 (1m16.6738352s)

-- stdout --
	* [embed-certs-20220921220947-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node embed-certs-20220921220947-5916 in cluster embed-certs-20220921220947-5916
	* Pulling base image ...
	* docker "embed-certs-20220921220947-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-20220921220947-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 22:11:03.885107    3596 out.go:296] Setting OutFile to fd 2016 ...
	I0921 22:11:03.940022    3596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:11:03.940022    3596 out.go:309] Setting ErrFile to fd 1780...
	I0921 22:11:03.940022    3596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:11:03.961048    3596 out.go:303] Setting JSON to false
	I0921 22:11:03.962514    3596 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4332,"bootTime":1663793931,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:11:03.963595    3596 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:11:03.967444    3596 out.go:177] * [embed-certs-20220921220947-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:11:03.970244    3596 notify.go:214] Checking for updates...
	I0921 22:11:03.971902    3596 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:11:03.974787    3596 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:11:03.977113    3596 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:11:03.980495    3596 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:11:03.984102    3596 config.go:180] Loaded profile config "embed-certs-20220921220947-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:11:03.985472    3596 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:11:04.299926    3596 docker.go:137] docker version: linux-20.10.17
	I0921 22:11:04.307244    3596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:11:04.920668    3596 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:83 SystemTime:2022-09-21 22:11:04.4756585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:11:04.926650    3596 out.go:177] * Using the docker driver based on existing profile
	I0921 22:11:04.930177    3596 start.go:284] selected driver: docker
	I0921 22:11:04.930177    3596 start.go:808] validating driver "docker" against &{Name:embed-certs-20220921220947-5916 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:embed-certs-20220921220947-5916 Namespace:default APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:11:04.930177    3596 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:11:04.996191    3596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:11:05.561109    3596 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:83 SystemTime:2022-09-21 22:11:05.1675842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:11:05.561109    3596 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:11:05.561109    3596 cni.go:95] Creating CNI manager for ""
	I0921 22:11:05.561109    3596 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:11:05.561109    3596 start_flags.go:316] config:
	{Name:embed-certs-20220921220947-5916 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:embed-certs-20220921220947-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:11:05.565867    3596 out.go:177] * Starting control plane node embed-certs-20220921220947-5916 in cluster embed-certs-20220921220947-5916
	I0921 22:11:05.574436    3596 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:11:05.577430    3596 out.go:177] * Pulling base image ...
	I0921 22:11:05.580756    3596 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:11:05.580756    3596 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:11:05.580756    3596 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:11:05.580756    3596 cache.go:57] Caching tarball of preloaded images
	I0921 22:11:05.580756    3596 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:11:05.580756    3596 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:11:05.581805    3596 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20220921220947-5916\config.json ...
	I0921 22:11:05.779674    3596 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:11:05.779821    3596 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:11:05.780197    3596 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:11:05.780262    3596 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:11:05.780385    3596 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:11:05.780385    3596 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:11:05.780663    3596 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:11:05.780702    3596 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:11:05.780702    3596 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:11:08.072182    3596 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:11:08.072182    3596 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:11:08.072182    3596 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:11:08.072836    3596 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:11:08.275127    3596 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [_______________________] ?% ? p/s 1.0s
	I0921 22:11:10.182483    3596 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:11:10.182483    3596 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:11:10.182483    3596 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:11:10.182483    3596 start.go:364] acquiring machines lock for embed-certs-20220921220947-5916: {Name:mk43bf8b7be7335eaf7b2b1bea9994b147371248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:11:10.182483    3596 start.go:368] acquired machines lock for "embed-certs-20220921220947-5916" in 0s
	I0921 22:11:10.182483    3596 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:11:10.182483    3596 fix.go:55] fixHost starting: 
	I0921 22:11:10.198924    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:10.385247    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:10.385247    3596 fix.go:103] recreateIfNeeded on embed-certs-20220921220947-5916: state= err=unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:10.385247    3596 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:11:10.388229    3596 out.go:177] * docker "embed-certs-20220921220947-5916" container is missing, will recreate.
	I0921 22:11:10.390232    3596 delete.go:124] DEMOLISHING embed-certs-20220921220947-5916 ...
	I0921 22:11:10.403224    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:10.588315    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:11:10.588315    3596 stop.go:75] unable to get state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:10.588315    3596 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:10.605312    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:10.779946    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:10.779946    3596 delete.go:82] Unable to get host status for embed-certs-20220921220947-5916, assuming it has already been deleted: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:10.788030    3596 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220921220947-5916
	W0921 22:11:10.982940    3596 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:10.982940    3596 kic.go:356] could not find the container embed-certs-20220921220947-5916 to remove it. will try anyways
	I0921 22:11:10.989938    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:11.173942    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:11:11.173942    3596 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:11.180937    3596 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-20220921220947-5916 /bin/bash -c "sudo init 0"
	W0921 22:11:11.363859    3596 cli_runner.go:211] docker exec --privileged -t embed-certs-20220921220947-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:11:11.363859    3596 oci.go:646] error shutdown embed-certs-20220921220947-5916: docker exec --privileged -t embed-certs-20220921220947-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:12.387260    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:12.581200    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:12.581200    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:12.581200    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:11:12.581200    3596 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:13.141654    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:13.363976    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:13.363976    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:13.363976    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:11:13.363976    3596 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:14.453568    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:14.666635    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:14.666635    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:14.666635    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:11:14.666635    3596 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:15.995631    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:16.202436    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:16.202436    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:16.202436    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:11:16.202436    3596 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:17.803451    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:18.008986    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:18.008986    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:18.008986    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:11:18.008986    3596 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:20.357483    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:20.542104    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:20.542375    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:20.542375    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:11:20.542375    3596 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:25.077308    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:25.258725    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:25.258862    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:25.258904    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:11:25.258904    3596 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:28.495373    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:28.719146    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:28.719203    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:28.719203    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:11:28.719203    3596 oci.go:88] couldn't shut down embed-certs-20220921220947-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	 
	I0921 22:11:28.726569    3596 cli_runner.go:164] Run: docker rm -f -v embed-certs-20220921220947-5916
	I0921 22:11:28.945681    3596 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220921220947-5916
	W0921 22:11:29.137250    3596 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:29.144318    3596 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220947-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:11:29.342592    3596 cli_runner.go:211] docker network inspect embed-certs-20220921220947-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:11:29.350605    3596 network_create.go:272] running [docker network inspect embed-certs-20220921220947-5916] to gather additional debugging logs...
	I0921 22:11:29.350605    3596 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220947-5916
	W0921 22:11:29.560829    3596 cli_runner.go:211] docker network inspect embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:29.560996    3596 network_create.go:275] error running [docker network inspect embed-certs-20220921220947-5916]: docker network inspect embed-certs-20220921220947-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220921220947-5916
	I0921 22:11:29.561083    3596 network_create.go:277] output of [docker network inspect embed-certs-20220921220947-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220921220947-5916
	
	** /stderr **
	W0921 22:11:29.562051    3596 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:11:29.562051    3596 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:11:30.568351    3596 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:11:30.579065    3596 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:11:30.579292    3596 start.go:159] libmachine.API.Create for "embed-certs-20220921220947-5916" (driver="docker")
	I0921 22:11:30.579292    3596 client.go:168] LocalClient.Create starting
	I0921 22:11:30.580275    3596 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:11:30.580275    3596 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:30.580275    3596 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:30.580988    3596 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:11:30.580988    3596 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:30.580988    3596 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:30.591469    3596 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220947-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:11:30.771414    3596 cli_runner.go:211] docker network inspect embed-certs-20220921220947-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:11:30.778849    3596 network_create.go:272] running [docker network inspect embed-certs-20220921220947-5916] to gather additional debugging logs...
	I0921 22:11:30.779367    3596 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220947-5916
	W0921 22:11:30.974588    3596 cli_runner.go:211] docker network inspect embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:30.974892    3596 network_create.go:275] error running [docker network inspect embed-certs-20220921220947-5916]: docker network inspect embed-certs-20220921220947-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220921220947-5916
	I0921 22:11:30.974892    3596 network_create.go:277] output of [docker network inspect embed-certs-20220921220947-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220921220947-5916
	
	** /stderr **
	I0921 22:11:30.981670    3596 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:11:31.195410    3596 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000a34398] misses:0}
	I0921 22:11:31.195410    3596 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:11:31.195410    3596 network_create.go:115] attempt to create docker network embed-certs-20220921220947-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:11:31.202733    3596 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916
	W0921 22:11:31.392055    3596 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916 returned with exit code 1
	E0921 22:11:31.392083    3596 network_create.go:104] error while trying to create docker network embed-certs-20220921220947-5916 192.168.49.0/24: create docker network embed-certs-20220921220947-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 20971b6bf157abe93cf94d02c92d9deb2e2dfc277606baae4ae6c95bd28195c3 (br-20971b6bf157): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:11:31.392083    3596 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220921220947-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 20971b6bf157abe93cf94d02c92d9deb2e2dfc277606baae4ae6c95bd28195c3 (br-20971b6bf157): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220921220947-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 20971b6bf157abe93cf94d02c92d9deb2e2dfc277606baae4ae6c95bd28195c3 (br-20971b6bf157): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
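The create fails because 192.168.49.0/24 overlaps an existing bridge network (br-a04d36bfb3cf) already present on this host, most likely left behind by an earlier test profile. A quick way to see which network holds the overlapping range, sketched for a bash shell (e.g. Git Bash on the Windows host) against the same daemon:

    # print every network followed by the subnets it claims
    for net in $(docker network ls --format '{{.Name}}'); do
      printf '%s ' "$net"
      docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}' "$net"
    done

Minikube treats the conflict as non-retryable for this subnet and continues without a dedicated network, which is why the run proceeds straight to volume creation below.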
	
	I0921 22:11:31.426371    3596 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:11:31.631354    3596 cli_runner.go:164] Run: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:11:31.810776    3596 cli_runner.go:211] docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:11:31.810907    3596 client.go:171] LocalClient.Create took 1.2315614s
	I0921 22:11:33.826449    3596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:11:33.834265    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:34.038320    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:34.038320    3596 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:34.200231    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:34.426015    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:34.426161    3596 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:34.741094    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:34.947854    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:34.950183    3596 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:35.546349    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:35.743823    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	W0921 22:11:35.743823    3596 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:11:35.743823    3596 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
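These df probes are minikube's disk-pressure check: after createHost it runs df -h /var (and then df -BG /var) inside the node over SSH, looking up the SSH port from the container's published 22/tcp port. Because the node container was never created, every port lookup fails with "No such container" and the check is skipped. Against a node container that does exist, the by-hand equivalent from a bash shell is roughly the following (the container name is the profile from this log and does not exist in this run):

    docker exec embed-certs-20220921220947-5916 df -h /var
    # the exact field minikube extracts (use percentage)
    docker exec embed-certs-20220921220947-5916 sh -c "df -h /var | awk 'NR==2{print \$5}'"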
	I0921 22:11:35.753822    3596 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:11:35.760825    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:35.934836    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:35.934836    3596 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:36.134818    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:36.317243    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:36.317243    3596 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:36.670128    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:36.875011    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:36.875011    3596 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:37.356074    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:37.580600    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	W0921 22:11:37.580600    3596 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:11:37.580600    3596 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:37.580600    3596 start.go:128] duration metric: createHost completed in 7.0120138s
	I0921 22:11:37.589601    3596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:11:37.595600    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:37.798902    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:37.799220    3596 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:38.014920    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:38.222713    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:38.222925    3596 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:38.542752    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:38.779692    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:38.779692    3596 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:39.462159    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:39.655175    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	W0921 22:11:39.655175    3596 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:11:39.655175    3596 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:39.667684    3596 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:11:39.674658    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:39.887973    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:39.888590    3596 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:40.084059    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:40.292975    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:40.293170    3596 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:40.636231    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:40.845980    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:40.846488    3596 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:41.462333    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:11:41.670192    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	W0921 22:11:41.670192    3596 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:11:41.670192    3596 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:41.670192    3596 fix.go:57] fixHost completed within 31.4874635s
	I0921 22:11:41.670192    3596 start.go:83] releasing machines lock for "embed-certs-20220921220947-5916", held for 31.4874635s
	W0921 22:11:41.670192    3596 start.go:602] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220921220947-5916 container: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220921220947-5916: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220921220947-5916': mkdir /var/lib/docker/volumes/embed-certs-20220921220947-5916: read-only file system
	W0921 22:11:41.670192    3596 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220921220947-5916 container: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220921220947-5916: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220921220947-5916': mkdir /var/lib/docker/volumes/embed-certs-20220921220947-5916: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220921220947-5916 container: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220921220947-5916: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220921220947-5916': mkdir /var/lib/docker/volumes/embed-certs-20220921220947-5916: read-only file system
	
	I0921 22:11:41.670192    3596 start.go:617] Will try again in 5 seconds ...
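The root cause of this first failed attempt is the volume create at 22:11:31: the daemon reports /var/lib/docker/volumes as a read-only file system, so no volume (and therefore no node container) can be created. A read-only data root usually indicates a problem inside the Docker Desktop VM (for example a full or unhealthy disk) rather than in minikube itself; the earlier network conflict is a secondary symptom. Two quick daemon-side checks, sketched with a hypothetical volume name (scratch-vol):

    # where this daemon keeps its data (normally /var/lib/docker)
    docker info --format '{{.DockerRootDir}}'
    # a throwaway volume create/remove; it fails the same way while the data root is read-only
    docker volume create scratch-vol
    docker volume rm scratch-vol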
	I0921 22:11:46.670807    3596 start.go:364] acquiring machines lock for embed-certs-20220921220947-5916: {Name:mk43bf8b7be7335eaf7b2b1bea9994b147371248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:11:46.671278    3596 start.go:368] acquired machines lock for "embed-certs-20220921220947-5916" in 361.6µs
	I0921 22:11:46.671488    3596 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:11:46.671530    3596 fix.go:55] fixHost starting: 
	I0921 22:11:46.690624    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:46.887419    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:46.887419    3596 fix.go:103] recreateIfNeeded on embed-certs-20220921220947-5916: state= err=unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:46.887419    3596 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:11:46.891715    3596 out.go:177] * docker "embed-certs-20220921220947-5916" container is missing, will recreate.
	I0921 22:11:46.893737    3596 delete.go:124] DEMOLISHING embed-certs-20220921220947-5916 ...
	I0921 22:11:46.905744    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:47.089446    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:11:47.089446    3596 stop.go:75] unable to get state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:47.089446    3596 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:47.104351    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:47.291086    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:47.291086    3596 delete.go:82] Unable to get host status for embed-certs-20220921220947-5916, assuming it has already been deleted: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:47.299279    3596 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220921220947-5916
	W0921 22:11:47.477353    3596 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:11:47.477553    3596 kic.go:356] could not find the container embed-certs-20220921220947-5916 to remove it. will try anyways
	I0921 22:11:47.485628    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:47.680078    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:11:47.680078    3596 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:47.690198    3596 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-20220921220947-5916 /bin/bash -c "sudo init 0"
	W0921 22:11:47.896274    3596 cli_runner.go:211] docker exec --privileged -t embed-certs-20220921220947-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:11:47.896450    3596 oci.go:646] error shutdown embed-certs-20220921220947-5916: docker exec --privileged -t embed-certs-20220921220947-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:48.905129    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:49.115501    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:49.115662    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:49.115753    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:11:49.115799    3596 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:49.535273    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:49.743087    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:49.743219    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:49.743283    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:11:49.743283    3596 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:50.349979    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:50.558343    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:50.558343    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:50.558343    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:11:50.558343    3596 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:51.988291    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:52.210396    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:52.210396    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:52.210396    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:11:52.210396    3596 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:53.415369    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:53.624545    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:53.624545    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:53.624545    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:11:53.624545    3596 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:57.093902    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:11:57.302847    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:11:57.303036    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:11:57.303154    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:11:57.303154    3596 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:01.866403    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:12:02.059227    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:02.059441    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:02.059489    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:12:02.059558    3596 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:07.899900    3596 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:12:08.091515    3596 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:08.091777    3596 oci.go:658] temporary error verifying shutdown: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:08.091816    3596 oci.go:660] temporary error: container embed-certs-20220921220947-5916 status is  but expect it to be exited
	I0921 22:12:08.091887    3596 oci.go:88] couldn't shut down embed-certs-20220921220947-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	 
	I0921 22:12:08.100256    3596 cli_runner.go:164] Run: docker rm -f -v embed-certs-20220921220947-5916
	I0921 22:12:08.317526    3596 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-20220921220947-5916
	W0921 22:12:08.498977    3596 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-20220921220947-5916 returned with exit code 1
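The DEMOLISHING phase is deliberately tolerant: minikube polls docker container inspect --format={{.State.Status}} with backoff to confirm a clean shutdown, gives up after about 20 seconds in this run because the container never existed, then falls back to a forced removal and one more inspect to confirm the container is gone. The same three commands can be run by hand against this daemon (the inspects return non-zero here because there is nothing to find):

    docker container inspect -f '{{.State.Status}}' embed-certs-20220921220947-5916
    docker rm -f -v embed-certs-20220921220947-5916
    docker container inspect -f '{{.Id}}' embed-certs-20220921220947-5916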
	I0921 22:12:08.505979    3596 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220947-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:12:08.731178    3596 cli_runner.go:211] docker network inspect embed-certs-20220921220947-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:12:08.740087    3596 network_create.go:272] running [docker network inspect embed-certs-20220921220947-5916] to gather additional debugging logs...
	I0921 22:12:08.740087    3596 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220947-5916
	W0921 22:12:08.952200    3596 cli_runner.go:211] docker network inspect embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:12:08.952200    3596 network_create.go:275] error running [docker network inspect embed-certs-20220921220947-5916]: docker network inspect embed-certs-20220921220947-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220921220947-5916
	I0921 22:12:08.952200    3596 network_create.go:277] output of [docker network inspect embed-certs-20220921220947-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220921220947-5916
	
	** /stderr **
	W0921 22:12:08.953193    3596 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:12:08.953193    3596 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:12:09.955931    3596 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:12:09.959146    3596 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:12:09.959683    3596 start.go:159] libmachine.API.Create for "embed-certs-20220921220947-5916" (driver="docker")
	I0921 22:12:09.959774    3596 client.go:168] LocalClient.Create starting
	I0921 22:12:09.960348    3596 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:12:09.960671    3596 main.go:134] libmachine: Decoding PEM data...
	I0921 22:12:09.960671    3596 main.go:134] libmachine: Parsing certificate...
	I0921 22:12:09.960988    3596 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:12:09.961248    3596 main.go:134] libmachine: Decoding PEM data...
	I0921 22:12:09.961248    3596 main.go:134] libmachine: Parsing certificate...
	I0921 22:12:09.974097    3596 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220947-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:12:10.180517    3596 cli_runner.go:211] docker network inspect embed-certs-20220921220947-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:12:10.187518    3596 network_create.go:272] running [docker network inspect embed-certs-20220921220947-5916] to gather additional debugging logs...
	I0921 22:12:10.187518    3596 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220947-5916
	W0921 22:12:10.382352    3596 cli_runner.go:211] docker network inspect embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:12:10.382471    3596 network_create.go:275] error running [docker network inspect embed-certs-20220921220947-5916]: docker network inspect embed-certs-20220921220947-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220921220947-5916
	I0921 22:12:10.382471    3596 network_create.go:277] output of [docker network inspect embed-certs-20220921220947-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220921220947-5916
	
	** /stderr **
	I0921 22:12:10.389908    3596 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:12:10.632076    3596 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a34398] amended:false}} dirty:map[] misses:0}
	I0921 22:12:10.632076    3596 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:12:10.649081    3596 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a34398] amended:true}} dirty:map[192.168.49.0:0xc000a34398 192.168.58.0:0xc000542130] misses:0}
	I0921 22:12:10.649081    3596 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:12:10.649081    3596 network_create.go:115] attempt to create docker network embed-certs-20220921220947-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:12:10.657066    3596 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916
	W0921 22:12:10.868833    3596 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916 returned with exit code 1
	E0921 22:12:10.868833    3596 network_create.go:104] error while trying to create docker network embed-certs-20220921220947-5916 192.168.58.0/24: create docker network embed-certs-20220921220947-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c64a65a9407cb478f24f15910b98de2a5fca079b95af0d2fd23232b603dac9dc (br-c64a65a9407c): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:12:10.868833    3596 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220921220947-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c64a65a9407cb478f24f15910b98de2a5fca079b95af0d2fd23232b603dac9dc (br-c64a65a9407c): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network embed-certs-20220921220947-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network c64a65a9407cb478f24f15910b98de2a5fca079b95af0d2fd23232b603dac9dc (br-c64a65a9407c): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
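The retry fares no better: 192.168.49.0/24 is still reserved in-process, so minikube moves to 192.168.58.0/24, which collides with yet another existing bridge (br-8a3cd8d165a4). When consecutive profiles hit this, stale networks from earlier test runs are the usual suspects. A sketch for checking, using the label minikube sets on its networks and the ID prefix reported in the daemon error (valid only while that network still exists); note that docker network prune would remove all unused networks on the host, not just minikube's:

    # networks created by minikube on this host
    docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
    # name and subnet of the conflicting network named in the daemon error
    docker network inspect -f '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}' 8a3cd8d165a4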
	
	I0921 22:12:10.883609    3596 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:12:11.105875    3596 cli_runner.go:164] Run: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:12:11.309918    3596 cli_runner.go:211] docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:12:11.309918    3596 client.go:171] LocalClient.Create took 1.3501337s
	I0921 22:12:13.333933    3596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:12:13.339928    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:13.540568    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:12:13.540568    3596 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:13.721428    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:13.949124    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:12:13.949124    3596 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:14.382103    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:14.590002    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	W0921 22:12:14.590002    3596 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:12:14.590002    3596 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:14.600981    3596 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:12:14.607995    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:14.791774    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:12:14.791832    3596 retry.go:31] will retry after 144.863405ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:14.955924    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:15.164415    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:12:15.164415    3596 retry.go:31] will retry after 410.553224ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:15.596010    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:15.780012    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:12:15.780012    3596 retry.go:31] will retry after 314.505366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:16.118747    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:16.309731    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	W0921 22:12:16.309731    3596 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:12:16.309731    3596 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:16.309731    3596 start.go:128] duration metric: createHost completed in 6.3537502s
	I0921 22:12:16.322913    3596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:12:16.332120    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:16.529608    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:12:16.529608    3596 retry.go:31] will retry after 200.38067ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:16.742529    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:16.938800    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:12:16.938800    3596 retry.go:31] will retry after 252.474839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:17.211061    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:17.424340    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:12:17.424340    3596 retry.go:31] will retry after 585.618668ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:18.026089    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:18.284818    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	W0921 22:12:18.284818    3596 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:12:18.284818    3596 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:18.297833    3596 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:12:18.304832    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:18.501799    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:12:18.501799    3596 retry.go:31] will retry after 194.626905ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:18.714022    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:18.912311    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:12:18.912311    3596 retry.go:31] will retry after 346.182076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:19.277499    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:19.476910    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	I0921 22:12:19.477303    3596 retry.go:31] will retry after 579.704465ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:20.066437    3596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916
	W0921 22:12:20.265437    3596 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916 returned with exit code 1
	W0921 22:12:20.265437    3596 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:12:20.265437    3596 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20220921220947-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220947-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	I0921 22:12:20.265437    3596 fix.go:57] fixHost completed within 33.5936836s
	I0921 22:12:20.265437    3596 start.go:83] releasing machines lock for "embed-certs-20220921220947-5916", held for 33.5938932s
	W0921 22:12:20.265437    3596 out.go:239] * Failed to start docker container. Running "minikube delete -p embed-certs-20220921220947-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220921220947-5916 container: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220921220947-5916: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220921220947-5916': mkdir /var/lib/docker/volumes/embed-certs-20220921220947-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p embed-certs-20220921220947-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220921220947-5916 container: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220921220947-5916: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220921220947-5916': mkdir /var/lib/docker/volumes/embed-certs-20220921220947-5916: read-only file system
	
	I0921 22:12:20.270438    3596 out.go:177] 
	W0921 22:12:20.273435    3596 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220921220947-5916 container: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220921220947-5916: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220921220947-5916': mkdir /var/lib/docker/volumes/embed-certs-20220921220947-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for embed-certs-20220921220947-5916 container: docker volume create embed-certs-20220921220947-5916 --label name.minikube.sigs.k8s.io=embed-certs-20220921220947-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create embed-certs-20220921220947-5916: error while creating volume root path '/var/lib/docker/volumes/embed-certs-20220921220947-5916': mkdir /var/lib/docker/volumes/embed-certs-20220921220947-5916: read-only file system
	
	W0921 22:12:20.273435    3596 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:12:20.273435    3596 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:12:20.277450    3596 out.go:177] 

                                                
                                                
** /stderr **
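The retry sequence above is minikube re-running docker container inspect to read the container's published 22/tcp host port while the container does not yet exist, waiting a little longer after each failed attempt (200ms, 252ms, 585ms, ...). A rough Go sketch of that pattern follows; the sshPort helper is invented for illustration and is not minikube's actual retry.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// sshPort re-runs "docker container inspect" until the published host port for
// 22/tcp can be read, sleeping longer after each failed attempt.
func sshPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	delay := 200 * time.Millisecond
	var lastErr error
	for attempt := 0; attempt < 5; attempt++ {
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		lastErr = err // "No such container" while the node is still being recreated
		time.Sleep(delay)
		delay *= 2 // the log above shows jittered, growing waits (200ms, 252ms, 585ms, ...)
	}
	return "", fmt.Errorf("get port 22 for %q: %w", container, lastErr)
}

func main() {
	port, err := sshPort("embed-certs-20220921220947-5916")
	fmt.Println(port, err)
}

Every attempt in the log fails the same way because the container was never recreated, so the caller eventually gives up and reports the ssh host-port error that feeds the df -h /var and df -BG /var warnings above.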
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p embed-certs-20220921220947-5916 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.2": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220947-5916

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220921220947-5916: exit status 1 (285.747ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916: exit status 7 (598.1667ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:21.385559    7264 status.go:247] status error: host: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220921220947-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (77.78s)
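This failure bottoms out in the Docker daemon, not in minikube: docker volume create is rejected because /var/lib/docker/volumes is read-only, which minikube reports as PR_DOCKER_READONLY_VOL together with the "Restart Docker" suggestion and issue kubernetes/minikube#6825. A minimal, hypothetical probe, plain Go shelling out to the same CLI command, with the volume name and labels copied from the log only as an example:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Volume name and labels taken from the failing command in the log above.
	name := "embed-certs-20220921220947-5916"
	cmd := exec.Command("docker", "volume", "create", name,
		"--label", "name.minikube.sigs.k8s.io="+name,
		"--label", "created_by.minikube.sigs.k8s.io=true")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		// On this host the daemon answers with the "read-only file system" error seen above.
		fmt.Printf("docker volume create failed: %v\n%s", err, stderr.String())
		return
	}
	fmt.Println("volume created; the daemon's volume root is writable")
	// Clean up the probe volume; ignore the error if removal fails.
	_ = exec.Command("docker", "volume", "rm", name).Run()
}

If the probe prints the same read-only error, restarting Docker Desktop, as the suggestion in the log says, is the usual way to get a writable volume root back.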

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20220921220934-5916" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220921220934-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220921220934-5916: exit status 1 (248.1623ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916: exit status 7 (598.5544ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:10.196228    1868 status.go:247] status error: host: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220921220934-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.86s)
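The post-stop checks for this profile fail before ever reaching a cluster: the kubeconfig no longer contains a context for the profile, so every kubectl call exits with "context ... does not exist". A hedged sketch of that precondition check, using kubectl config get-contexts -o name; the hasContext helper is invented for illustration:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

// hasContext reports whether the current kubeconfig contains the named context,
// by listing context names one per line via kubectl.
func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if sc.Text() == name {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasContext("old-k8s-version-20220921220934-5916")
	fmt.Println(ok, err) // false on this host, which is exactly why the dashboard checks cannot run
}

The AddonExistsAfterStop failure below is the same missing-context condition hit by a different assertion.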

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20220921220934-5916" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-20220921220934-5916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220921220934-5916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (184.6914ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220921220934-5916" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220921220934-5916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220921220934-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220921220934-5916: exit status 1 (255.0211ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916: exit status 7 (601.7646ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:11.261908    6992 status.go:247] status error: host: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220921220934-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (1.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220921220934-5916 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220921220934-5916 "sudo crictl images -o json": exit status 80 (1.1155997s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_18.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220921220934-5916 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images JSON: unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220921220934-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220921220934-5916: exit status 1 (251.9236ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916: exit status 7 (583.7253ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:13.224962    8508 status.go:247] status error: host: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220921220934-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.96s)
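The JSON decode failure above follows directly from the empty output: the ssh step exited with status 80, so the test had nothing but an empty string to parse, and the entire v1.16.0 want list is reported as missing. A hedged sketch of the decode-and-compare step, assuming crictl images -o json keeps the CRI response shape {"images":[{"repoTags":[...]}]}; missingImages is an illustrative helper, not the test's code:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// imageList mirrors the assumed crictl output: {"images":[{"repoTags":["..."]}]}.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// missingImages reports every wanted image that no repoTag in raw contains.
func missingImages(raw []byte, want []string) ([]string, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, fmt.Errorf("failed to decode images JSON: %w", err)
	}
	var tags []string
	for _, img := range list.Images {
		tags = append(tags, img.RepoTags...)
	}
	var missing []string
	for _, w := range want {
		found := false
		for _, t := range tags {
			if strings.Contains(t, w) {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, w)
		}
	}
	return missing, nil
}

func main() {
	want := []string{"k8s.gcr.io/kube-apiserver:v1.16.0", "k8s.gcr.io/pause:3.1"}
	// Empty output, as in the failed ssh step above.
	missing, err := missingImages([]byte(""), want)
	fmt.Println(missing, err)
}

With empty input, json.Unmarshal returns exactly the "unexpected end of JSON input" error quoted above.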

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20220921220937-5916" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220937-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220921220937-5916: exit status 1 (286.6094ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916: exit status 7 (617.8406ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:12.856253    6088 status.go:247] status error: host: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220921220937-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.91s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20220921220937-5916" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-20220921220937-5916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-20220921220937-5916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (171.0913ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-20220921220937-5916" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-20220921220937-5916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220937-5916

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220921220937-5916: exit status 1 (249.9944ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916: exit status 7 (644.1645ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:13.933151    4112 status.go:247] status error: host: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220921220937-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (1.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-20220921220934-5916 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220921220934-5916 --alsologtostderr -v=1: exit status 80 (1.1309447s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:12:13.529575    5512 out.go:296] Setting OutFile to fd 1388 ...
	I0921 22:12:13.592358    5512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:12:13.592417    5512 out.go:309] Setting ErrFile to fd 1756...
	I0921 22:12:13.592476    5512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:12:13.605931    5512 out.go:303] Setting JSON to false
	I0921 22:12:13.605931    5512 mustload.go:65] Loading cluster: old-k8s-version-20220921220934-5916
	I0921 22:12:13.606796    5512 config.go:180] Loaded profile config "old-k8s-version-20220921220934-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0921 22:12:13.629999    5512 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}
	W0921 22:12:13.838093    5512 cli_runner.go:211] docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:13.841092    5512 out.go:177] 
	W0921 22:12:13.848091    5512 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916
	
	W0921 22:12:13.848091    5512 out.go:239] * 
	* 
	W0921 22:12:14.351905    5512 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_26.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_26.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:12:14.356374    5512 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220921220934-5916 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220921220934-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220921220934-5916: exit status 1 (267.9878ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916: exit status 7 (590.1742ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:15.227470    5840 status.go:247] status error: host: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220921220934-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220921220934-5916

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20220921220934-5916: exit status 1 (238.5803ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220921220934-5916 -n old-k8s-version-20220921220934-5916: exit status 7 (594.6664ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:16.091182    6608 status.go:247] status error: host: state: unknown state "old-k8s-version-20220921220934-5916": docker container inspect old-k8s-version-20220921220934-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20220921220934-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20220921220934-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (2.85s)
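pause never gets as far as pausing anything: its first step is the state probe docker container inspect --format={{.State.Status}}, and with the container gone that probe fails, so minikube exits with GUEST_STATUS. A hedged Go sketch of that probe; containerState is an invented name for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState runs the same probe minikube logs above: read the container's
// .State.Status (e.g. "running", "exited") and fail when the container is gone.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("unknown state %q: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("old-k8s-version-20220921220934-5916")
	if err != nil {
		// On this host this is the path taken: "No such container", so pause aborts.
		fmt.Println(err)
		return
	}
	fmt.Println("state:", state, "- safe to pause only when it is \"running\"")
}

The no-preload Pause failure further down is the same sequence against the other missing container.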

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-20220921220937-5916 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p no-preload-20220921220937-5916 "sudo crictl images -o json": exit status 80 (1.1205384s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_18.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p no-preload-20220921220937-5916 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images JSON: unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:304: v1.25.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.9.3",
- 	"registry.k8s.io/etcd:3.5.4-0",
- 	"registry.k8s.io/kube-apiserver:v1.25.2",
- 	"registry.k8s.io/kube-controller-manager:v1.25.2",
- 	"registry.k8s.io/kube-proxy:v1.25.2",
- 	"registry.k8s.io/kube-scheduler:v1.25.2",
- 	"registry.k8s.io/pause:3.8",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220937-5916

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220921220937-5916: exit status 1 (260.5216ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916: exit status 7 (628.4912ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:15.965481    1864 status.go:247] status error: host: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220921220937-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-20220921220937-5916 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p no-preload-20220921220937-5916 --alsologtostderr -v=1: exit status 80 (1.1428127s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:12:16.260391    2204 out.go:296] Setting OutFile to fd 1952 ...
	I0921 22:12:16.336242    2204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:12:16.336242    2204 out.go:309] Setting ErrFile to fd 1576...
	I0921 22:12:16.336242    2204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:12:16.354325    2204 out.go:303] Setting JSON to false
	I0921 22:12:16.354352    2204 mustload.go:65] Loading cluster: no-preload-20220921220937-5916
	I0921 22:12:16.355581    2204 config.go:180] Loaded profile config "no-preload-20220921220937-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:16.376611    2204 cli_runner.go:164] Run: docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}
	W0921 22:12:16.577616    2204 cli_runner.go:211] docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:16.582669    2204 out.go:177] 
	W0921 22:12:16.585676    2204 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916
	
	W0921 22:12:16.585676    2204 out.go:239] * 
	* 
	W0921 22:12:17.091052    2204 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_466.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_466.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:12:17.096116    2204 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-windows-amd64.exe pause -p no-preload-20220921220937-5916 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220937-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220921220937-5916: exit status 1 (258.2957ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916: exit status 7 (609.7033ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:17.970577    6292 status.go:247] status error: host: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220921220937-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220937-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220921220937-5916: exit status 1 (275.7327ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220921220937-5916 -n no-preload-20220921220937-5916: exit status 7 (595.443ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:18.864270    1564 status.go:247] status error: host: state: unknown state "no-preload-20220921220937-5916": docker container inspect no-preload-20220921220937-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20220921220937-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20220921220937-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (2.90s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (50.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220921221221-5916 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.2

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220921221221-5916 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.2: exit status 60 (49.1250417s)

                                                
                                                
-- stdout --
	* [default-k8s-different-port-20220921221221-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node default-k8s-different-port-20220921221221-5916 in cluster default-k8s-different-port-20220921221221-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-different-port-20220921221221-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 22:12:21.266044    9056 out.go:296] Setting OutFile to fd 1800 ...
	I0921 22:12:21.329129    9056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:12:21.330047    9056 out.go:309] Setting ErrFile to fd 1660...
	I0921 22:12:21.330047    9056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:12:21.351567    9056 out.go:303] Setting JSON to false
	I0921 22:12:21.355561    9056 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4409,"bootTime":1663793932,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:12:21.355561    9056 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:12:21.361556    9056 out.go:177] * [default-k8s-different-port-20220921221221-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:12:21.364556    9056 notify.go:214] Checking for updates...
	I0921 22:12:21.366565    9056 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:12:21.369561    9056 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:12:21.371563    9056 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:12:21.374569    9056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:12:21.377563    9056 config.go:180] Loaded profile config "cert-expiration-20220921220719-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:21.378578    9056 config.go:180] Loaded profile config "embed-certs-20220921220947-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:21.379562    9056 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:21.379562    9056 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:12:21.688563    9056 docker.go:137] docker version: linux-20.10.17
	I0921 22:12:21.698556    9056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:12:22.269952    9056 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:85 SystemTime:2022-09-21 22:12:21.8617455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:12:22.273857    9056 out.go:177] * Using the docker driver based on user configuration
	I0921 22:12:22.275894    9056 start.go:284] selected driver: docker
	I0921 22:12:22.275894    9056 start.go:808] validating driver "docker" against <nil>
	I0921 22:12:22.275894    9056 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:12:22.347757    9056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:12:22.930685    9056 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:85 SystemTime:2022-09-21 22:12:22.5147667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:12:22.930685    9056 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:12:22.931695    9056 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:12:22.934683    9056 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:12:22.936728    9056 cni.go:95] Creating CNI manager for ""
	I0921 22:12:22.936728    9056 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:12:22.936728    9056 start_flags.go:316] config:
	{Name:default-k8s-different-port-20220921221221-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221221-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:12:22.941732    9056 out.go:177] * Starting control plane node default-k8s-different-port-20220921221221-5916 in cluster default-k8s-different-port-20220921221221-5916
	I0921 22:12:22.943702    9056 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:12:22.947693    9056 out.go:177] * Pulling base image ...
	I0921 22:12:22.949691    9056 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:12:22.949691    9056 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:12:22.949691    9056 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:12:22.949691    9056 cache.go:57] Caching tarball of preloaded images
	I0921 22:12:22.950692    9056 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:12:22.950692    9056 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:12:22.950692    9056 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20220921221221-5916\config.json ...
	I0921 22:12:22.950692    9056 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20220921221221-5916\config.json: {Name:mkd8be52f543e7b5387799e0a27e5ac4e0bfa28e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:12:23.152398    9056 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:12:23.152398    9056 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:12:23.152398    9056 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:12:23.152398    9056 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:12:23.152398    9056 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:12:23.152398    9056 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:12:23.152398    9056 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:12:23.152398    9056 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:12:23.152398    9056 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:12:25.748848    9056 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:12:25.748848    9056 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:12:25.748848    9056 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:12:25.749857    9056 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:12:25.980376    9056 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms
	I0921 22:12:27.493132    9056 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:12:27.493132    9056 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:12:27.493132    9056 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:12:27.493132    9056 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221221-5916: {Name:mk83eca1da19c7d9c5cd0808c146559719914d48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:12:27.493132    9056 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221221-5916" in 0s
	I0921 22:12:27.493132    9056 start.go:93] Provisioning new machine with config: &{Name:default-k8s-different-port-20220921221221-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221221-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:12:27.493132    9056 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:12:27.497130    9056 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:12:27.498127    9056 start.go:159] libmachine.API.Create for "default-k8s-different-port-20220921221221-5916" (driver="docker")
	I0921 22:12:27.498127    9056 client.go:168] LocalClient.Create starting
	I0921 22:12:27.498127    9056 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:12:27.499129    9056 main.go:134] libmachine: Decoding PEM data...
	I0921 22:12:27.499129    9056 main.go:134] libmachine: Parsing certificate...
	I0921 22:12:27.499129    9056 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:12:27.499129    9056 main.go:134] libmachine: Decoding PEM data...
	I0921 22:12:27.499129    9056 main.go:134] libmachine: Parsing certificate...
	I0921 22:12:27.510138    9056 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221221-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:12:27.716314    9056 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221221-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:12:27.724311    9056 network_create.go:272] running [docker network inspect default-k8s-different-port-20220921221221-5916] to gather additional debugging logs...
	I0921 22:12:27.724311    9056 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221221-5916
	W0921 22:12:27.934901    9056 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:12:27.935101    9056 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220921221221-5916]: docker network inspect default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220921221221-5916
	I0921 22:12:27.935187    9056 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220921221221-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220921221221-5916
	
	** /stderr **
	I0921 22:12:27.943868    9056 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:12:28.191933    9056 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00068ac60] misses:0}
	I0921 22:12:28.191933    9056 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:12:28.191933    9056 network_create.go:115] attempt to create docker network default-k8s-different-port-20220921221221-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:12:28.199928    9056 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916
	W0921 22:12:28.403355    9056 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916 returned with exit code 1
	E0921 22:12:28.403355    9056 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220921221221-5916 192.168.49.0/24: create docker network default-k8s-different-port-20220921221221-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f5d9da0d8a66496e6b583d74095de128d50b589dc26071f0782d17c2b8e5373b (br-f5d9da0d8a66): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:12:28.403355    9056 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220921221221-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f5d9da0d8a66496e6b583d74095de128d50b589dc26071f0782d17c2b8e5373b (br-f5d9da0d8a66): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220921221221-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f5d9da0d8a66496e6b583d74095de128d50b589dc26071f0782d17c2b8e5373b (br-f5d9da0d8a66): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:12:28.422351    9056 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:12:28.618367    9056 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:12:28.807265    9056 cli_runner.go:211] docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:12:28.807299    9056 client.go:171] LocalClient.Create took 1.3091625s
	I0921 22:12:30.818939    9056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:12:30.840820    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:12:31.047722    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:12:31.047722    9056 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:31.336859    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:12:31.530420    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:12:31.530420    9056 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:32.086002    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:12:32.288207    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	W0921 22:12:32.288207    9056 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:12:32.288207    9056 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:32.298214    9056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:12:32.305203    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:12:32.492861    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:12:32.492861    9056 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:32.737302    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:12:32.933213    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:12:32.933213    9056 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:33.300145    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:12:33.501955    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:12:33.502335    9056 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:34.178143    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:12:34.379196    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	W0921 22:12:34.379196    9056 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:12:34.379196    9056 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:34.379196    9056 start.go:128] duration metric: createHost completed in 6.8860101s
	I0921 22:12:34.379196    9056 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221221-5916", held for 6.8860101s
	W0921 22:12:34.379196    9056 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220921221221-5916 container: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220921221221-5916: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916: read-only file system
	I0921 22:12:34.394201    9056 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:12:34.583759    9056 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:34.583759    9056 delete.go:82] Unable to get host status for default-k8s-different-port-20220921221221-5916, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	W0921 22:12:34.583759    9056 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220921221221-5916 container: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220921221221-5916: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220921221221-5916 container: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220921221221-5916: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916: read-only file system
	
	I0921 22:12:34.583759    9056 start.go:617] Will try again in 5 seconds ...
	I0921 22:12:39.590797    9056 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221221-5916: {Name:mk83eca1da19c7d9c5cd0808c146559719914d48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:12:39.590797    9056 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221221-5916" in 0s
	I0921 22:12:39.591328    9056 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:12:39.591413    9056 fix.go:55] fixHost starting: 
	I0921 22:12:39.604817    9056 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:12:39.791359    9056 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:39.791535    9056 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220921221221-5916: state= err=unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:39.791702    9056 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:12:39.795212    9056 out.go:177] * docker "default-k8s-different-port-20220921221221-5916" container is missing, will recreate.
	I0921 22:12:39.799040    9056 delete.go:124] DEMOLISHING default-k8s-different-port-20220921221221-5916 ...
	I0921 22:12:39.811468    9056 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:12:40.008297    9056 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:12:40.008437    9056 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:40.008536    9056 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:40.023333    9056 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:12:40.226254    9056 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:40.226254    9056 delete.go:82] Unable to get host status for default-k8s-different-port-20220921221221-5916, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:40.234283    9056 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220921221221-5916
	W0921 22:12:40.429715    9056 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:12:40.429834    9056 kic.go:356] could not find the container default-k8s-different-port-20220921221221-5916 to remove it. will try anyways
	I0921 22:12:40.439196    9056 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:12:40.645633    9056 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:12:40.645955    9056 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:40.653309    9056 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-different-port-20220921221221-5916 /bin/bash -c "sudo init 0"
	W0921 22:12:40.862900    9056 cli_runner.go:211] docker exec --privileged -t default-k8s-different-port-20220921221221-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:12:40.862964    9056 oci.go:646] error shutdown default-k8s-different-port-20220921221221-5916: docker exec --privileged -t default-k8s-different-port-20220921221221-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:41.876155    9056 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:12:42.103023    9056 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:42.103067    9056 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:42.103067    9056 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:12:42.103067    9056 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:42.456057    9056 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:12:42.645177    9056 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:42.645241    9056 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:42.645390    9056 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:12:42.645390    9056 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:43.103156    9056 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:12:43.294947    9056 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:43.295334    9056 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:43.295334    9056 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:12:43.295334    9056 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:44.209673    9056 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:12:44.402824    9056 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:44.402824    9056 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:44.402824    9056 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:12:44.402824    9056 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:46.137677    9056 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:12:46.344015    9056 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:46.344099    9056 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:46.348228    9056 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:12:46.348228    9056 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:49.683485    9056 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:12:49.893076    9056 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:49.893076    9056 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:49.893076    9056 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:12:49.893076    9056 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:52.619821    9056 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:12:52.827459    9056 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:52.827576    9056 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:52.827576    9056 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:12:52.827576    9056 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:57.851978    9056 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:12:58.038572    9056 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:58.038572    9056 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:12:58.038572    9056 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:12:58.038572    9056 oci.go:88] couldn't shut down default-k8s-different-port-20220921221221-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	 
	I0921 22:12:58.046443    9056 cli_runner.go:164] Run: docker rm -f -v default-k8s-different-port-20220921221221-5916
	I0921 22:12:58.264211    9056 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220921221221-5916
	W0921 22:12:58.457526    9056 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:12:58.464983    9056 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221221-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:12:58.644762    9056 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221221-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:12:58.652413    9056 network_create.go:272] running [docker network inspect default-k8s-different-port-20220921221221-5916] to gather additional debugging logs...
	I0921 22:12:58.652413    9056 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221221-5916
	W0921 22:12:58.833282    9056 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:12:58.833282    9056 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220921221221-5916]: docker network inspect default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220921221221-5916
	I0921 22:12:58.833282    9056 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220921221221-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220921221221-5916
	
	** /stderr **
	W0921 22:12:58.833282    9056 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:12:58.833282    9056 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:12:59.835437    9056 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:12:59.842015    9056 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:12:59.842157    9056 start.go:159] libmachine.API.Create for "default-k8s-different-port-20220921221221-5916" (driver="docker")
	I0921 22:12:59.842157    9056 client.go:168] LocalClient.Create starting
	I0921 22:12:59.842755    9056 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:12:59.842755    9056 main.go:134] libmachine: Decoding PEM data...
	I0921 22:12:59.842755    9056 main.go:134] libmachine: Parsing certificate...
	I0921 22:12:59.842755    9056 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:12:59.843504    9056 main.go:134] libmachine: Decoding PEM data...
	I0921 22:12:59.843592    9056 main.go:134] libmachine: Parsing certificate...
	I0921 22:12:59.858270    9056 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221221-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:13:00.069920    9056 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221221-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:13:00.081719    9056 network_create.go:272] running [docker network inspect default-k8s-different-port-20220921221221-5916] to gather additional debugging logs...
	I0921 22:13:00.081719    9056 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221221-5916
	W0921 22:13:00.286338    9056 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:00.286338    9056 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220921221221-5916]: docker network inspect default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220921221221-5916
	I0921 22:13:00.286338    9056 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220921221221-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220921221221-5916
	
	** /stderr **
	I0921 22:13:00.292338    9056 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:13:00.507771    9056 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068ac60] amended:false}} dirty:map[] misses:0}
	I0921 22:13:00.508699    9056 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:13:00.526688    9056 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00068ac60] amended:true}} dirty:map[192.168.49.0:0xc00068ac60 192.168.58.0:0xc000722348] misses:0}
	I0921 22:13:00.526688    9056 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:13:00.526688    9056 network_create.go:115] attempt to create docker network default-k8s-different-port-20220921221221-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:13:00.534692    9056 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916
	W0921 22:13:00.744561    9056 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916 returned with exit code 1
	E0921 22:13:00.744671    9056 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220921221221-5916 192.168.58.0/24: create docker network default-k8s-different-port-20220921221221-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2c5fcfd9e370471a253c02483c2b5c0b156568000f444ebfb74e6b1f01880efe (br-2c5fcfd9e370): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:13:00.744671    9056 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220921221221-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2c5fcfd9e370471a253c02483c2b5c0b156568000f444ebfb74e6b1f01880efe (br-2c5fcfd9e370): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220921221221-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 2c5fcfd9e370471a253c02483c2b5c0b156568000f444ebfb74e6b1f01880efe (br-2c5fcfd9e370): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:13:00.758578    9056 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:13:00.970823    9056 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:13:01.166342    9056 cli_runner.go:211] docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:13:01.166558    9056 client.go:171] LocalClient.Create took 1.3242679s
	I0921 22:13:03.187934    9056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:13:03.195986    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:03.398425    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:03.398646    9056 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:03.663502    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:03.860475    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:03.860564    9056 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:04.169629    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:04.391729    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:04.392079    9056 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:04.860089    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:05.038964    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	W0921 22:13:05.038964    9056 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:13:05.038964    9056 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:05.048961    9056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:13:05.055960    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:05.245506    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:05.245506    9056 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:05.438992    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:05.618944    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:05.618944    9056 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:05.891960    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:06.100622    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:06.100622    9056 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:06.609569    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:06.820276    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	W0921 22:13:06.820332    9056 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:13:06.820332    9056 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:06.820332    9056 start.go:128] duration metric: createHost completed in 6.9846803s
	I0921 22:13:06.835257    9056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:13:06.843624    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:07.059111    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:07.059111    9056 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:07.406766    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:07.603875    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:07.603875    9056 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:07.924238    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:08.108261    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:08.108261    9056 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:08.574566    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:08.758598    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	W0921 22:13:08.758598    9056 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:13:08.758598    9056 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:08.768567    9056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:13:08.775579    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:08.950577    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:08.950577    9056 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:09.144614    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:09.359034    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:09.359034    9056 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:09.895464    9056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:10.104186    9056 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	W0921 22:13:10.104186    9056 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:13:10.104186    9056 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:10.104186    9056 fix.go:57] fixHost completed within 30.512617s
	I0921 22:13:10.104186    9056 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221221-5916", held for 30.5131473s
	W0921 22:13:10.105223    9056 out.go:239] * Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220921221221-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220921221221-5916 container: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220921221221-5916: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220921221221-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220921221221-5916 container: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220921221221-5916: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916: read-only file system
	
	I0921 22:13:10.111634    9056 out.go:177] 
	W0921 22:13:10.113715    9056 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220921221221-5916 container: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220921221221-5916: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220921221221-5916 container: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220921221221-5916: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916: read-only file system
	
	W0921 22:13:10.113715    9056 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:13:10.113715    9056 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:13:10.118175    9056 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220921221221-5916 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.2": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221221-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220921221221-5916: exit status 1 (256.8117ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916: exit status 7 (605.2111ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:13:11.101915    1856 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220921221221-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (50.09s)
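The FirstStart failure above bottoms out in a single daemon error: "docker volume create" cannot write under /var/lib/docker/volumes ("read-only file system"), which minikube surfaces as PR_DOCKER_READONLY_VOL together with the suggestion to restart Docker. As a rough illustration only, the Go sketch below shows the kind of volume-creation smoke test one could run against the local daemon to confirm whether that condition is still present; it is a hypothetical probe, not minikube's code, and the volume name and error matching are assumptions.

// volcheck.go: probe for the "read-only file system" volume failure seen above.
// Hypothetical helper, not part of minikube; requires the docker CLI on PATH.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const name = "minikube-volcheck" // throwaway volume name (illustrative)

	var stderr bytes.Buffer
	create := exec.Command("docker", "volume", "create", name)
	create.Stderr = &stderr

	if err := create.Run(); err != nil {
		if strings.Contains(stderr.String(), "read-only file system") {
			// Same daemon error as in the log: the Docker data root cannot be
			// written to, so no volume or container for the profile can be created.
			fmt.Println("docker data root is read-only; the report suggests restarting Docker")
			return
		}
		fmt.Printf("docker volume create failed: %v\n%s", err, stderr.String())
		return
	}

	// Creation succeeded; remove the probe volume again.
	_ = exec.Command("docker", "volume", "rm", name).Run()
	fmt.Println("volume creation works; the read-only condition is not present")
}

If the probe prints the read-only message, restarting Docker Desktop is the remedy the report itself suggests before rerunning the test.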

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20220921220947-5916" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220947-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220921220947-5916: exit status 1 (246.0029ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916: exit status 7 (612.0562ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:22.254601    8148 status.go:247] status error: host: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220921220947-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.87s)
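Both post-mortems above reduce to the same probe: "docker container inspect <name> --format {{.State.Status}}" exits with status 1 and "No such container", which the status helper then reports as Nonexistent. The Go sketch below illustrates that probe in isolation; it is a hypothetical helper, not minikube's status.go, and the profile name is simply the one taken from the log above.

// statecheck.go: illustrative version of the container-state probe used in the
// post-mortems above. Hypothetical helper, not minikube's implementation.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// containerState returns the Docker state of the named container, or
// "Nonexistent" when the container does not exist at all.
func containerState(name string) (string, error) {
	var out, errOut bytes.Buffer
	cmd := exec.Command("docker", "container", "inspect", name, "--format", "{{.State.Status}}")
	cmd.Stdout = &out
	cmd.Stderr = &errOut

	if err := cmd.Run(); err != nil {
		if strings.Contains(errOut.String(), "No such container") {
			return "Nonexistent", nil // container was never created or has been removed
		}
		return "", fmt.Errorf("docker container inspect %s: %w\nstderr: %s", name, err, errOut.String())
	}
	return strings.TrimSpace(out.String()), nil
}

func main() {
	state, err := containerState("embed-certs-20220921220947-5916")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(state) // e.g. "running", "exited", or "Nonexistent"
}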

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (50.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220921221222-5916 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.2

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-20220921221222-5916 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.2: exit status 60 (49.7984661s)

                                                
                                                
-- stdout --
	* [newest-cni-20220921221222-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node newest-cni-20220921221222-5916 in cluster newest-cni-20220921221222-5916
	* Pulling base image ...
	* Another minikube instance is downloading dependencies... 
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-20220921221222-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:12:22.486812    5640 out.go:296] Setting OutFile to fd 2012 ...
	I0921 22:12:22.556810    5640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:12:22.556810    5640 out.go:309] Setting ErrFile to fd 2032...
	I0921 22:12:22.556810    5640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:12:22.576809    5640 out.go:303] Setting JSON to false
	I0921 22:12:22.580805    5640 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4411,"bootTime":1663793931,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:12:22.580805    5640 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:12:22.591818    5640 out.go:177] * [newest-cni-20220921221222-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:12:22.594804    5640 notify.go:214] Checking for updates...
	I0921 22:12:22.596813    5640 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:12:22.599838    5640 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:12:22.601802    5640 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:12:22.604804    5640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:12:22.608812    5640 config.go:180] Loaded profile config "cert-expiration-20220921220719-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:22.608812    5640 config.go:180] Loaded profile config "embed-certs-20220921220947-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:22.609810    5640 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:22.609810    5640 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:12:22.898681    5640 docker.go:137] docker version: linux-20.10.17
	I0921 22:12:22.905725    5640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:12:23.423408    5640 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:85 SystemTime:2022-09-21 22:12:23.0539501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:12:23.426397    5640 out.go:177] * Using the docker driver based on user configuration
	I0921 22:12:23.429396    5640 start.go:284] selected driver: docker
	I0921 22:12:23.429396    5640 start.go:808] validating driver "docker" against <nil>
	I0921 22:12:23.429396    5640 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:12:23.495999    5640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:12:24.112739    5640 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:86 SystemTime:2022-09-21 22:12:23.6759253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:12:24.113450    5640 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	W0921 22:12:24.113450    5640 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0921 22:12:24.114648    5640 start_flags.go:886] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0921 22:12:24.119085    5640 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:12:24.121297    5640 cni.go:95] Creating CNI manager for ""
	I0921 22:12:24.121355    5640 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:12:24.121407    5640 start_flags.go:316] config:
	{Name:newest-cni-20220921221222-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:newest-cni-20220921221222-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:12:24.125844    5640 out.go:177] * Starting control plane node newest-cni-20220921221222-5916 in cluster newest-cni-20220921221222-5916
	I0921 22:12:24.127743    5640 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:12:24.131054    5640 out.go:177] * Pulling base image ...
	I0921 22:12:24.134756    5640 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:12:24.134814    5640 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:12:24.135017    5640 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:12:24.135075    5640 cache.go:57] Caching tarball of preloaded images
	I0921 22:12:24.135635    5640 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:12:24.135832    5640 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:12:24.135998    5640 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20220921221222-5916\config.json ...
	I0921 22:12:24.135998    5640 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20220921221222-5916\config.json: {Name:mkb2efe571f76f92307e6464e8f44355e92c48f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:12:24.347321    5640 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:12:24.347321    5640 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:12:24.347321    5640 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:12:24.347321    5640 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:12:24.347321    5640 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:12:24.347321    5640 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:12:24.347321    5640 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:12:24.347321    5640 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:12:24.347321    5640 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:12:26.918926    5640 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:12:26.918926    5640 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:12:26.919031    5640 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:12:27.028030    5640 out.go:204] * Another minikube instance is downloading dependencies... 
	I0921 22:12:27.493132    5640 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:12:27.732308    5640 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms
	I0921 22:12:29.253208    5640 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:12:29.253208    5640 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
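The sequence above is minikube's KIC image handoff: the digest-pinned kicbase tarball is found in the on-disk cache (note the Windows path sanitizing, which swaps colons for underscores), loaded, and then written into the local docker daemon. A minimal way to check both ends of that handoff by hand, assuming the docker CLI talks to the same daemon and that MINIKUBE_HOME points at the minikube-integration directory this run uses instead of the default location:

    # Is the sanitized kicbase tarball present in the KIC cache?
    # (colons in the image ref become underscores, per the localpath.go lines above)
    ls "$MINIKUBE_HOME/.minikube/cache/kic/amd64"

    # Is the image already loaded into the local docker daemon?
    # A non-zero exit means the tarball still has to be written to the daemon.
    docker image inspect gcr.io/k8s-minikube/kicbase:v0.0.34 --format '{{.Id}}'
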
	I0921 22:12:29.253208    5640 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:12:29.253208    5640 start.go:364] acquiring machines lock for newest-cni-20220921221222-5916: {Name:mkba2d573750337952145210e595be8251a49600 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:12:29.253208    5640 start.go:368] acquired machines lock for "newest-cni-20220921221222-5916" in 0s
	I0921 22:12:29.254163    5640 start.go:93] Provisioning new machine with config: &{Name:newest-cni-20220921221222-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:newest-cni-20220921221222-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
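The config dump above is the machine spec minikube derived from the test's start flags. The exact argument list is not reproduced in this excerpt, so the following is only a hedged reconstruction of a start command that would yield roughly this spec (2 CPUs, 2200 MB, docker driver, Kubernetes v1.25.2, CNI network plugin, the ServerSideApply feature gate, the kubeadm pod-network-cidr extra config, and a --wait set inferred from the VerifyComponents map):

    # Hypothetical reconstruction of the start invocation behind the config above;
    # all flags are standard `minikube start` flags, values taken from the dump.
    minikube start -p newest-cni-20220921221222-5916 \
      --driver=docker --memory=2200 --cpus=2 \
      --kubernetes-version=v1.25.2 \
      --network-plugin=cni \
      --feature-gates=ServerSideApply=true \
      --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
      --wait=apiserver,system_pods,default_sa \
      --alsologtostderr
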
	I0921 22:12:29.254163    5640 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:12:29.258201    5640 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:12:29.258201    5640 start.go:159] libmachine.API.Create for "newest-cni-20220921221222-5916" (driver="docker")
	I0921 22:12:29.258201    5640 client.go:168] LocalClient.Create starting
	I0921 22:12:29.259167    5640 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:12:29.259167    5640 main.go:134] libmachine: Decoding PEM data...
	I0921 22:12:29.259167    5640 main.go:134] libmachine: Parsing certificate...
	I0921 22:12:29.259167    5640 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:12:29.259167    5640 main.go:134] libmachine: Decoding PEM data...
	I0921 22:12:29.259167    5640 main.go:134] libmachine: Parsing certificate...
	I0921 22:12:29.268169    5640 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221222-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:12:29.457524    5640 cli_runner.go:211] docker network inspect newest-cni-20220921221222-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:12:29.465521    5640 network_create.go:272] running [docker network inspect newest-cni-20220921221222-5916] to gather additional debugging logs...
	I0921 22:12:29.465521    5640 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221222-5916
	W0921 22:12:29.666973    5640 cli_runner.go:211] docker network inspect newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:12:29.666973    5640 network_create.go:275] error running [docker network inspect newest-cni-20220921221222-5916]: docker network inspect newest-cni-20220921221222-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220921221222-5916
	I0921 22:12:29.666973    5640 network_create.go:277] output of [docker network inspect newest-cni-20220921221222-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220921221222-5916
	
	** /stderr **
	I0921 22:12:29.676971    5640 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:12:29.892969    5640 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006068c8] misses:0}
	I0921 22:12:29.892969    5640 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:12:29.892969    5640 network_create.go:115] attempt to create docker network newest-cni-20220921221222-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:12:29.899970    5640 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916
	W0921 22:12:30.108172    5640 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916 returned with exit code 1
	E0921 22:12:30.108243    5640 network_create.go:104] error while trying to create docker network newest-cni-20220921221222-5916 192.168.49.0/24: create docker network newest-cni-20220921221222-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network be1891cb2a6f438376f8358c10d7fb6762d59a31d15a2a92f4e4b4b33a85c721 (br-be1891cb2a6f): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:12:30.108243    5640 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220921221222-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network be1891cb2a6f438376f8358c10d7fb6762d59a31d15a2a92f4e4b4b33a85c721 (br-be1891cb2a6f): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220921221222-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network be1891cb2a6f438376f8358c10d7fb6762d59a31d15a2a92f4e4b4b33a85c721 (br-be1891cb2a6f): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
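The daemon rejects the dedicated 192.168.49.0/24 network because an existing bridge (br-a04d36bfb3cf) already claims an overlapping range; the error names that network only by its bridge id, so mapping it back to a named network has to be done by hand. A quick way to list every docker network with its subnet, assuming a POSIX-style shell against the same daemon:

    # Which existing network owns the conflicting range?
    docker network ls --format '{{.ID}}  {{.Name}}'
    docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' $(docker network ls -q)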
	
	I0921 22:12:30.125080    5640 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:12:30.341187    5640 cli_runner.go:164] Run: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:12:30.535230    5640 cli_runner.go:211] docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:12:30.535230    5640 client.go:171] LocalClient.Create took 1.2770192s
	I0921 22:12:32.555734    5640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:12:32.565021    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:12:32.746590    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:12:32.746590    5640 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:33.033430    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:12:33.228233    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:12:33.228233    5640 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:33.790331    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:12:33.998110    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	W0921 22:12:33.998110    5640 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:12:33.998110    5640 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:34.011329    5640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:12:34.019302    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:12:34.219144    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:12:34.219144    5640 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:34.467766    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:12:34.662565    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:12:34.662917    5640 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:35.028414    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:12:35.267798    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:12:35.267798    5640 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:35.956759    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:12:36.151109    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	W0921 22:12:36.151109    5640 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:12:36.151109    5640 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
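Every "will retry after ..." entry above is the same probe: minikube asks the docker CLI for the container's published SSH port, and since the container was never created the lookup fails identically each time until the retry budget is spent. The loop is easy to reproduce by hand; a sketch with a crude linear backoff (minikube itself uses an increasing, jittered backoff), assuming a POSIX shell:

    # Poll for the profile container's published SSH (22/tcp) host port.
    name=newest-cni-20220921221222-5916
    for i in 1 2 3 4 5; do
      port=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$name" 2>/dev/null) && break
      sleep "$i"
    done
    echo "${port:-no container named $name}"
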
	I0921 22:12:36.151109    5640 start.go:128] duration metric: createHost completed in 6.8968921s
	I0921 22:12:36.151109    5640 start.go:83] releasing machines lock for "newest-cni-20220921221222-5916", held for 6.8978474s
	W0921 22:12:36.151109    5640 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for newest-cni-20220921221222-5916 container: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220921221222-5916: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220921221222-5916': mkdir /var/lib/docker/volumes/newest-cni-20220921221222-5916: read-only file system
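This stderr is the actual root cause of the failed volume create (and, by extension, of the whole StartHost attempt): the daemon cannot mkdir under /var/lib/docker because its backing filesystem is mounted read-only, which on Docker Desktop typically means the VM's disk is full or its filesystem remounted itself read-only after an I/O error. Two quick probes that take minikube out of the picture, assuming the same daemon is still reachable:

    # Where does this daemon keep its data?
    docker info --format '{{.DockerRootDir}}'

    # Try to create and remove a throwaway volume; if this reproduces the
    # "read-only file system" error, the Docker Desktop VM itself is unhealthy.
    docker volume create rofs-probe && docker volume rm rofs-probe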
	I0921 22:12:36.166931    5640 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:12:36.368992    5640 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:36.368992    5640 delete.go:82] Unable to get host status for newest-cni-20220921221222-5916, assuming it has already been deleted: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	W0921 22:12:36.368992    5640 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for newest-cni-20220921221222-5916 container: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220921221222-5916: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220921221222-5916': mkdir /var/lib/docker/volumes/newest-cni-20220921221222-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for newest-cni-20220921221222-5916 container: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220921221222-5916: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220921221222-5916': mkdir /var/lib/docker/volumes/newest-cni-20220921221222-5916: read-only file system
	
	I0921 22:12:36.368992    5640 start.go:617] Will try again in 5 seconds ...
	I0921 22:12:41.369751    5640 start.go:364] acquiring machines lock for newest-cni-20220921221222-5916: {Name:mkba2d573750337952145210e595be8251a49600 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:12:41.369811    5640 start.go:368] acquired machines lock for "newest-cni-20220921221222-5916" in 0s
	I0921 22:12:41.370337    5640 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:12:41.370337    5640 fix.go:55] fixHost starting: 
	I0921 22:12:41.385083    5640 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:12:41.572114    5640 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:41.572114    5640 fix.go:103] recreateIfNeeded on newest-cni-20220921221222-5916: state= err=unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:41.572114    5640 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:12:41.577115    5640 out.go:177] * docker "newest-cni-20220921221222-5916" container is missing, will recreate.
	I0921 22:12:41.579117    5640 delete.go:124] DEMOLISHING newest-cni-20220921221222-5916 ...
	I0921 22:12:41.593126    5640 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:12:41.777574    5640 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:12:41.777801    5640 stop.go:75] unable to get state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:41.777865    5640 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:41.792463    5640 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:12:41.979995    5640 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:41.979995    5640 delete.go:82] Unable to get host status for newest-cni-20220921221222-5916, assuming it has already been deleted: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:41.991710    5640 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220921221222-5916
	W0921 22:12:42.211701    5640 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:12:42.211701    5640 kic.go:356] could not find the container newest-cni-20220921221222-5916 to remove it. will try anyways
	I0921 22:12:42.218705    5640 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:12:42.443914    5640 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:12:42.443914    5640 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:42.456589    5640 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-20220921221222-5916 /bin/bash -c "sudo init 0"
	W0921 22:12:42.676890    5640 cli_runner.go:211] docker exec --privileged -t newest-cni-20220921221222-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:12:42.676890    5640 oci.go:646] error shutdown newest-cni-20220921221222-5916: docker exec --privileged -t newest-cni-20220921221222-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:43.691131    5640 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:12:43.875163    5640 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:43.875163    5640 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:43.875163    5640 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:12:43.875163    5640 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:44.225594    5640 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:12:44.418835    5640 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:44.419016    5640 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:44.419204    5640 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:12:44.419270    5640 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:44.885166    5640 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:12:45.078803    5640 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:45.078957    5640 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:45.078957    5640 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:12:45.079047    5640 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:45.994400    5640 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:12:46.205738    5640 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:46.205738    5640 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:46.205738    5640 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:12:46.205738    5640 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:47.939328    5640 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:12:48.146548    5640 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:48.146677    5640 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:48.146832    5640 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:12:48.146892    5640 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:51.492848    5640 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:12:51.683033    5640 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:51.683033    5640 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:51.683033    5640 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:12:51.683033    5640 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:54.411080    5640 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:12:54.588029    5640 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:54.588324    5640 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:54.588365    5640 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:12:54.588399    5640 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:59.622775    5640 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:12:59.819700    5640 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:59.819829    5640 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:12:59.819829    5640 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:12:59.819829    5640 oci.go:88] couldn't shut down newest-cni-20220921221222-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
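Having failed to verify a shutdown (there is nothing left to shut down), minikube falls back to force-removal of the container, volume and network below. If a profile is ever left in this half-created state, the equivalent manual cleanup looks like this, assuming nothing else on the host reuses these names:

    # Force-remove any leftovers for the profile, then let minikube drop its own state.
    name=newest-cni-20220921221222-5916
    docker rm -f -v "$name" 2>/dev/null
    docker volume rm "$name" 2>/dev/null
    docker network rm "$name" 2>/dev/null
    minikube delete -p "$name"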
	 
	I0921 22:12:59.827494    5640 cli_runner.go:164] Run: docker rm -f -v newest-cni-20220921221222-5916
	I0921 22:13:00.043444    5640 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220921221222-5916
	W0921 22:13:00.255175    5640 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:00.264898    5640 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221222-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:13:00.459738    5640 cli_runner.go:211] docker network inspect newest-cni-20220921221222-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:13:00.468703    5640 network_create.go:272] running [docker network inspect newest-cni-20220921221222-5916] to gather additional debugging logs...
	I0921 22:13:00.468703    5640 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221222-5916
	W0921 22:13:00.650892    5640 cli_runner.go:211] docker network inspect newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:00.651076    5640 network_create.go:275] error running [docker network inspect newest-cni-20220921221222-5916]: docker network inspect newest-cni-20220921221222-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220921221222-5916
	I0921 22:13:00.651122    5640 network_create.go:277] output of [docker network inspect newest-cni-20220921221222-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220921221222-5916
	
	** /stderr **
	W0921 22:13:00.652138    5640 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:13:00.652138    5640 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:13:01.664748    5640 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:13:01.669094    5640 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:13:01.669235    5640 start.go:159] libmachine.API.Create for "newest-cni-20220921221222-5916" (driver="docker")
	I0921 22:13:01.669235    5640 client.go:168] LocalClient.Create starting
	I0921 22:13:01.669931    5640 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:13:01.669931    5640 main.go:134] libmachine: Decoding PEM data...
	I0921 22:13:01.669931    5640 main.go:134] libmachine: Parsing certificate...
	I0921 22:13:01.669931    5640 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:13:01.670506    5640 main.go:134] libmachine: Decoding PEM data...
	I0921 22:13:01.670506    5640 main.go:134] libmachine: Parsing certificate...
	I0921 22:13:01.678044    5640 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221222-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:13:01.867038    5640 cli_runner.go:211] docker network inspect newest-cni-20220921221222-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:13:01.875073    5640 network_create.go:272] running [docker network inspect newest-cni-20220921221222-5916] to gather additional debugging logs...
	I0921 22:13:01.875073    5640 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221222-5916
	W0921 22:13:02.069916    5640 cli_runner.go:211] docker network inspect newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:02.069916    5640 network_create.go:275] error running [docker network inspect newest-cni-20220921221222-5916]: docker network inspect newest-cni-20220921221222-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220921221222-5916
	I0921 22:13:02.069916    5640 network_create.go:277] output of [docker network inspect newest-cni-20220921221222-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220921221222-5916
	
	** /stderr **
	I0921 22:13:02.077505    5640 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:13:02.287752    5640 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006068c8] amended:false}} dirty:map[] misses:0}
	I0921 22:13:02.287989    5640 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:13:02.302837    5640 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006068c8] amended:true}} dirty:map[192.168.49.0:0xc0006068c8 192.168.58.0:0xc000606e00] misses:0}
	I0921 22:13:02.302837    5640 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:13:02.302837    5640 network_create.go:115] attempt to create docker network newest-cni-20220921221222-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:13:02.311030    5640 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916
	W0921 22:13:02.505730    5640 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916 returned with exit code 1
	E0921 22:13:02.505828    5640 network_create.go:104] error while trying to create docker network newest-cni-20220921221222-5916 192.168.58.0/24: create docker network newest-cni-20220921221222-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7b17dd5f53df3b453cf8c533828fbe4c6e59b47680a1d8a6f039bf44e86feb3d (br-7b17dd5f53df): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:13:02.505852    5640 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220921221222-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7b17dd5f53df3b453cf8c533828fbe4c6e59b47680a1d8a6f039bf44e86feb3d (br-7b17dd5f53df): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220921221222-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7b17dd5f53df3b453cf8c533828fbe4c6e59b47680a1d8a6f039bf44e86feb3d (br-7b17dd5f53df): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:13:02.521782    5640 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:13:02.729759    5640 cli_runner.go:164] Run: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:13:02.955318    5640 cli_runner.go:211] docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:13:02.955318    5640 client.go:171] LocalClient.Create took 1.2860727s
	I0921 22:13:04.969962    5640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:13:04.976969    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:05.193508    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:05.193508    5640 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:05.452214    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:05.634931    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:05.634931    5640 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:05.940389    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:06.132236    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:06.132236    5640 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:06.592176    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:06.820082    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	W0921 22:13:06.820332    5640 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:13:06.820332    5640 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:06.832904    5640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:13:06.840632    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:07.043553    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:07.043615    5640 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:07.241115    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:07.431772    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:07.431772    5640 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:07.703909    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:07.901240    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:07.901240    5640 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:08.406936    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:08.646563    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	W0921 22:13:08.646563    5640 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:13:08.646563    5640 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:08.646563    5640 start.go:128] duration metric: createHost completed in 6.9815711s
	I0921 22:13:08.657563    5640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:13:08.663582    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:08.870592    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:08.870592    5640 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:09.222359    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:09.437460    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:09.437460    5640 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:09.751745    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:09.948136    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:09.948622    5640 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:10.409884    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:10.608798    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	W0921 22:13:10.608798    5640 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:13:10.608798    5640 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:10.620723    5640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:13:10.626717    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:10.852980    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:10.852980    5640 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:11.049323    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:11.241566    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:11.241566    5640 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:11.768002    5640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:11.975744    5640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	W0921 22:13:11.975744    5640 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:13:11.975744    5640 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:11.975744    5640 fix.go:57] fixHost completed within 30.6051658s
	I0921 22:13:11.975744    5640 start.go:83] releasing machines lock for "newest-cni-20220921221222-5916", held for 30.6056915s
	W0921 22:13:11.976560    5640 out.go:239] * Failed to start docker container. Running "minikube delete -p newest-cni-20220921221222-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220921221222-5916 container: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220921221222-5916: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220921221222-5916': mkdir /var/lib/docker/volumes/newest-cni-20220921221222-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p newest-cni-20220921221222-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220921221222-5916 container: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220921221222-5916: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220921221222-5916': mkdir /var/lib/docker/volumes/newest-cni-20220921221222-5916: read-only file system
	
	I0921 22:13:11.981815    5640 out.go:177] 
	W0921 22:13:11.984007    5640 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220921221222-5916 container: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220921221222-5916: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220921221222-5916': mkdir /var/lib/docker/volumes/newest-cni-20220921221222-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220921221222-5916 container: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220921221222-5916: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220921221222-5916': mkdir /var/lib/docker/volumes/newest-cni-20220921221222-5916: read-only file system
	
	W0921 22:13:11.984535    5640 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:13:11.984761    5640 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:13:11.988003    5640 out.go:177] 

                                                
                                                
** /stderr **
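The retry loop above shows how minikube resolves an SSH endpoint for a node: it asks Docker for the host port published for the container's 22/tcp via a Go template, and keeps retrying while the container does not exist. A minimal, hedged sketch of that lookup, assuming only the docker CLI and the Go standard library (the fixed 300ms delay and the 10-attempt cap are illustrative, not minikube's actual retry.go behaviour):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// sshPort returns the host port Docker published for the container's 22/tcp,
	// retrying briefly while the container does not exist yet (mirrors the loop above).
	func sshPort(name string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		for attempt := 0; attempt < 10; attempt++ {
			out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			time.Sleep(300 * time.Millisecond) // illustrative delay; minikube's retry.go varies it
		}
		return "", fmt.Errorf("no 22/tcp mapping found for %q", name)
	}

	func main() {
		port, err := sshPort("newest-cni-20220921221222-5916")
		fmt.Println(port, err)
	}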
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p newest-cni-20220921221222-5916 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.2": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220921221222-5916

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220921221222-5916: exit status 1 (265.0451ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220921221222-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916: exit status 7 (639.3438ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:13:12.996989    8864 status.go:247] status error: host: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220921221222-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (50.80s)
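The missing container here is a symptom; the root cause captured above is `docker volume create ...` failing because /var/lib/docker inside the Docker Desktop VM is a read-only file system, which is why the output suggests restarting Docker and points at issue 6825. A small probe like the following can confirm that condition before retrying a start; it is a sketch only, and the volume name is an arbitrary placeholder:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	// probeDockerWritable creates and removes a throwaway volume to check whether
	// the daemon can still write under /var/lib/docker.
	func probeDockerWritable() error {
		const name = "minikube-writability-probe" // arbitrary, illustrative name
		var stderr bytes.Buffer
		cmd := exec.Command("docker", "volume", "create", name)
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			if strings.Contains(stderr.String(), "read-only file system") {
				return fmt.Errorf("docker storage is read-only; restarting Docker Desktop usually clears it")
			}
			return fmt.Errorf("volume create failed: %v: %s", err, stderr.String())
		}
		return exec.Command("docker", "volume", "rm", name).Run()
	}

	func main() {
		fmt.Println(probeDockerWritable())
	}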

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20220921220947-5916" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-20220921220947-5916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-20220921220947-5916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (187.1183ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-20220921220947-5916" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-20220921220947-5916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220947-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220921220947-5916: exit status 1 (236.5627ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916: exit status 7 (602.0235ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:23.296399    5420 status.go:247] status error: host: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220921220947-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (1.04s)
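Every kubectl call in this block fails the same way because the embed-certs profile never started, so its context was never written to the kubeconfig. When diagnosing such cascades it can help to check for the context up front; the sketch below does that with client-go's kubeconfig loader (it assumes k8s.io/client-go is on the module path, and is not how the test itself performs the check):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	// hasContext reports whether the default kubeconfig defines the named context.
	func hasContext(name string) (bool, error) {
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			return false, err
		}
		_, ok := cfg.Contexts[name]
		return ok, nil
	}

	func main() {
		ok, err := hasContext("embed-certs-20220921220947-5916")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("context present:", ok)
	}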

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (2.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-20220921220947-5916 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p embed-certs-20220921220947-5916 "sudo crictl images -o json": exit status 80 (1.1597824s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_18.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p embed-certs-20220921220947-5916 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images json unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:304: v1.25.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.9.3",
- 	"registry.k8s.io/etcd:3.5.4-0",
- 	"registry.k8s.io/kube-apiserver:v1.25.2",
- 	"registry.k8s.io/kube-controller-manager:v1.25.2",
- 	"registry.k8s.io/kube-proxy:v1.25.2",
- 	"registry.k8s.io/kube-scheduler:v1.25.2",
- 	"registry.k8s.io/pause:3.8",
  }
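The image verification works by running `sudo crictl images -o json` on the node, decoding the JSON, and diffing the repo tags against the expected v1.25.2 list; since SSH never connected, the payload was empty, the decode failed with `unexpected end of JSON input`, and every expected image landed on the -want side of the diff above. A hedged sketch of that decode step (the `images` and `repoTags` field names reflect crictl's JSON output and should be treated as an assumption):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// criImages mirrors the parts of `crictl images -o json` output the check cares about.
	type criImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// repoTags extracts every repo:tag string from the crictl JSON payload.
	func repoTags(raw []byte) ([]string, error) {
		var parsed criImages
		if err := json.Unmarshal(raw, &parsed); err != nil {
			return nil, err // an empty payload fails here with "unexpected end of JSON input"
		}
		var tags []string
		for _, img := range parsed.Images {
			tags = append(tags, img.RepoTags...)
		}
		return tags, nil
	}

	func main() {
		tags, err := repoTags([]byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.8"]}]}`))
		fmt.Println(tags, err)
	}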
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220947-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220921220947-5916: exit status 1 (278.468ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916: exit status 7 (637.7089ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:25.391076    9148 status.go:247] status error: host: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220921220947-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (2.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-20220921220947-5916 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p embed-certs-20220921220947-5916 --alsologtostderr -v=1: exit status 80 (1.1246567s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:12:25.693975    8896 out.go:296] Setting OutFile to fd 2044 ...
	I0921 22:12:25.757849    8896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:12:25.758845    8896 out.go:309] Setting ErrFile to fd 1624...
	I0921 22:12:25.758845    8896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:12:25.770844    8896 out.go:303] Setting JSON to false
	I0921 22:12:25.772103    8896 mustload.go:65] Loading cluster: embed-certs-20220921220947-5916
	I0921 22:12:25.772103    8896 config.go:180] Loaded profile config "embed-certs-20220921220947-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:25.791184    8896 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}
	W0921 22:12:25.980376    8896 cli_runner.go:211] docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:25.996376    8896 out.go:177] 
	W0921 22:12:25.999375    8896 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916
	
	W0921 22:12:25.999375    8896 out.go:239] * 
	* 
	W0921 22:12:26.503415    8896 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_status_abcabdb3ea89e0e0cb5bb0e0976767ebe71062f4_70.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_status_abcabdb3ea89e0e0cb5bb0e0976767ebe71062f4_70.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:12:26.506415    8896 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-windows-amd64.exe pause -p embed-certs-20220921220947-5916 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220947-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220921220947-5916: exit status 1 (283.1132ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916: exit status 7 (607.5188ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:27.415154    6292 status.go:247] status error: host: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220921220947-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220947-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220921220947-5916: exit status 1 (240.7047ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220921220947-5916 -n embed-certs-20220921220947-5916: exit status 7 (595.935ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:12:28.264951    6452 status.go:247] status error: host: state: unknown state "embed-certs-20220921220947-5916": docker container inspect embed-certs-20220921220947-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20220921220947-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20220921220947-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (2.87s)
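pause fails at the same first step as the post-mortem helpers: asking Docker for the container state with `--format {{.State.Status}}`, which exits non-zero for a missing container. minikube surfaces that as GUEST_STATUS, and the status helper reports the host as Nonexistent with exit status 7, which it explicitly treats as possibly OK. A minimal sketch of that state probe, shelling out to the docker CLI with simplified error handling:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState returns Docker's view of the container state ("running",
	// "exited", ...) or "Nonexistent" when the container is not found.
	func containerState(name string) string {
		out, err := exec.Command("docker", "container", "inspect", name, "--format", "{{.State.Status}}").Output()
		if err != nil {
			return "Nonexistent" // matches how the status output above reports a missing container
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		fmt.Println(containerState("embed-certs-20220921220947-5916"))
	}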

                                                
                                    
TestNetworkPlugins/group/auto/Start (49.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20220921220528-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p auto-20220921220528-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: exit status 60 (49.1806306s)

                                                
                                                
-- stdout --
	* [auto-20220921220528-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node auto-20220921220528-5916 in cluster auto-20220921220528-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "auto-20220921220528-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:12:28.305260    4228 out.go:296] Setting OutFile to fd 1724 ...
	I0921 22:12:28.391361    4228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:12:28.391361    4228 out.go:309] Setting ErrFile to fd 1572...
	I0921 22:12:28.391361    4228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:12:28.418354    4228 out.go:303] Setting JSON to false
	I0921 22:12:28.421350    4228 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4417,"bootTime":1663793931,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:12:28.421350    4228 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:12:28.426405    4228 out.go:177] * [auto-20220921220528-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:12:28.430388    4228 notify.go:214] Checking for updates...
	I0921 22:12:28.432367    4228 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:12:28.435356    4228 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:12:28.437354    4228 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:12:28.439351    4228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:12:28.442349    4228 config.go:180] Loaded profile config "default-k8s-different-port-20220921221221-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:28.443347    4228 config.go:180] Loaded profile config "embed-certs-20220921220947-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:28.443347    4228 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:28.443347    4228 config.go:180] Loaded profile config "newest-cni-20220921221222-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:28.443347    4228 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:12:28.721894    4228 docker.go:137] docker version: linux-20.10.17
	I0921 22:12:28.728947    4228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:12:29.269171    4228 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:87 SystemTime:2022-09-21 22:12:28.8948769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:12:29.272166    4228 out.go:177] * Using the docker driver based on user configuration
	I0921 22:12:29.275212    4228 start.go:284] selected driver: docker
	I0921 22:12:29.275212    4228 start.go:808] validating driver "docker" against <nil>
	I0921 22:12:29.275212    4228 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:12:29.335178    4228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:12:29.904972    4228 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:87 SystemTime:2022-09-21 22:12:29.4890031 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:12:29.904972    4228 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:12:29.905976    4228 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:12:29.908971    4228 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:12:29.910974    4228 cni.go:95] Creating CNI manager for ""
	I0921 22:12:29.911970    4228 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:12:29.911970    4228 start_flags.go:316] config:
	{Name:auto-20220921220528-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:auto-20220921220528-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRI
Socket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:12:29.914970    4228 out.go:177] * Starting control plane node auto-20220921220528-5916 in cluster auto-20220921220528-5916
	I0921 22:12:29.917022    4228 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:12:29.919983    4228 out.go:177] * Pulling base image ...
	I0921 22:12:29.922970    4228 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:12:29.922970    4228 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:12:29.922970    4228 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:12:29.922970    4228 cache.go:57] Caching tarball of preloaded images
	I0921 22:12:29.922970    4228 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:12:29.922970    4228 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:12:29.923981    4228 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-20220921220528-5916\config.json ...
	I0921 22:12:29.923981    4228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-20220921220528-5916\config.json: {Name:mk7c86cd970e8b53c20b9b799186ede8fcf2504f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:12:30.138997    4228 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:12:30.138997    4228 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:12:30.138997    4228 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:12:30.138997    4228 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:12:30.138997    4228 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:12:30.138997    4228 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:12:30.138997    4228 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:12:30.138997    4228 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:12:30.138997    4228 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:12:32.548256    4228 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:12:32.548336    4228 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:12:32.548411    4228 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:12:32.548836    4228 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:12:32.762316    4228 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms
	I0921 22:12:34.299197    4228 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:12:34.299197    4228 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:12:34.299197    4228 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:12:34.299197    4228 start.go:364] acquiring machines lock for auto-20220921220528-5916: {Name:mk63523cc7a6858ea7c2e93a49610b9c0ee7ee51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:12:34.299197    4228 start.go:368] acquired machines lock for "auto-20220921220528-5916" in 0s
	I0921 22:12:34.299197    4228 start.go:93] Provisioning new machine with config: &{Name:auto-20220921220528-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:auto-20220921220528-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath
:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:12:34.300174    4228 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:12:34.303207    4228 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:12:34.304163    4228 start.go:159] libmachine.API.Create for "auto-20220921220528-5916" (driver="docker")
	I0921 22:12:34.304163    4228 client.go:168] LocalClient.Create starting
	I0921 22:12:34.304163    4228 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:12:34.304163    4228 main.go:134] libmachine: Decoding PEM data...
	I0921 22:12:34.304163    4228 main.go:134] libmachine: Parsing certificate...
	I0921 22:12:34.304163    4228 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:12:34.305160    4228 main.go:134] libmachine: Decoding PEM data...
	I0921 22:12:34.305160    4228 main.go:134] libmachine: Parsing certificate...
	I0921 22:12:34.317156    4228 cli_runner.go:164] Run: docker network inspect auto-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:12:34.504763    4228 cli_runner.go:211] docker network inspect auto-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:12:34.511775    4228 network_create.go:272] running [docker network inspect auto-20220921220528-5916] to gather additional debugging logs...
	I0921 22:12:34.511775    4228 cli_runner.go:164] Run: docker network inspect auto-20220921220528-5916
	W0921 22:12:34.693747    4228 cli_runner.go:211] docker network inspect auto-20220921220528-5916 returned with exit code 1
	I0921 22:12:34.693904    4228 network_create.go:275] error running [docker network inspect auto-20220921220528-5916]: docker network inspect auto-20220921220528-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220921220528-5916
	I0921 22:12:34.693979    4228 network_create.go:277] output of [docker network inspect auto-20220921220528-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220921220528-5916
	
	** /stderr **
	I0921 22:12:34.702998    4228 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:12:34.917805    4228 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014b3a8] misses:0}
	I0921 22:12:34.917805    4228 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:12:34.917805    4228 network_create.go:115] attempt to create docker network auto-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:12:34.924176    4228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220921220528-5916 auto-20220921220528-5916
	W0921 22:12:35.141832    4228 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220921220528-5916 auto-20220921220528-5916 returned with exit code 1
	E0921 22:12:35.141832    4228 network_create.go:104] error while trying to create docker network auto-20220921220528-5916 192.168.49.0/24: create docker network auto-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220921220528-5916 auto-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 19f39cb6cbb8f90471e81f8bd014b085aa3a7b70acd2b64bdb21505e092560dd (br-19f39cb6cbb8): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:12:35.141832    4228 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220921220528-5916 auto-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 19f39cb6cbb8f90471e81f8bd014b085aa3a7b70acd2b64bdb21505e092560dd (br-19f39cb6cbb8): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220921220528-5916 auto-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 19f39cb6cbb8f90471e81f8bd014b085aa3a7b70acd2b64bdb21505e092560dd (br-19f39cb6cbb8): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:12:35.157773    4228 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:12:35.399764    4228 cli_runner.go:164] Run: docker volume create auto-20220921220528-5916 --label name.minikube.sigs.k8s.io=auto-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:12:35.602345    4228 cli_runner.go:211] docker volume create auto-20220921220528-5916 --label name.minikube.sigs.k8s.io=auto-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:12:35.602494    4228 client.go:171] LocalClient.Create took 1.2983204s
	I0921 22:12:37.615768    4228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:12:37.622524    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:12:37.828695    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:12:37.829725    4228 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:38.127387    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:12:38.318527    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:12:38.318754    4228 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:38.871969    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:12:39.062892    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	W0921 22:12:39.062980    4228 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	
	W0921 22:12:39.062980    4228 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:39.074697    4228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:12:39.082890    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:12:39.310286    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:12:39.310286    4228 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:39.568369    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:12:39.776128    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:12:39.776331    4228 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:40.141653    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:12:40.351569    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:12:40.351569    4228 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:41.034089    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:12:41.259820    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	W0921 22:12:41.259820    4228 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	
	W0921 22:12:41.259820    4228 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:41.259820    4228 start.go:128] duration metric: createHost completed in 6.9595911s
	I0921 22:12:41.259820    4228 start.go:83] releasing machines lock for "auto-20220921220528-5916", held for 6.9605673s
	W0921 22:12:41.260389    4228 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for auto-20220921220528-5916 container: docker volume create auto-20220921220528-5916 --label name.minikube.sigs.k8s.io=auto-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/auto-20220921220528-5916': mkdir /var/lib/docker/volumes/auto-20220921220528-5916: read-only file system
	I0921 22:12:41.277181    4228 cli_runner.go:164] Run: docker container inspect auto-20220921220528-5916 --format={{.State.Status}}
	W0921 22:12:41.479118    4228 cli_runner.go:211] docker container inspect auto-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:41.479118    4228 delete.go:82] Unable to get host status for auto-20220921220528-5916, assuming it has already been deleted: state: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	W0921 22:12:41.479118    4228 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for auto-20220921220528-5916 container: docker volume create auto-20220921220528-5916 --label name.minikube.sigs.k8s.io=auto-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/auto-20220921220528-5916': mkdir /var/lib/docker/volumes/auto-20220921220528-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for auto-20220921220528-5916 container: docker volume create auto-20220921220528-5916 --label name.minikube.sigs.k8s.io=auto-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/auto-20220921220528-5916': mkdir /var/lib/docker/volumes/auto-20220921220528-5916: read-only file system
	
	I0921 22:12:41.479118    4228 start.go:617] Will try again in 5 seconds ...
	I0921 22:12:46.485988    4228 start.go:364] acquiring machines lock for auto-20220921220528-5916: {Name:mk63523cc7a6858ea7c2e93a49610b9c0ee7ee51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:12:46.485988    4228 start.go:368] acquired machines lock for "auto-20220921220528-5916" in 0s
	I0921 22:12:46.486590    4228 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:12:46.486662    4228 fix.go:55] fixHost starting: 
	I0921 22:12:46.502005    4228 cli_runner.go:164] Run: docker container inspect auto-20220921220528-5916 --format={{.State.Status}}
	W0921 22:12:46.690858    4228 cli_runner.go:211] docker container inspect auto-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:46.690858    4228 fix.go:103] recreateIfNeeded on auto-20220921220528-5916: state= err=unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:46.690858    4228 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:12:46.698250    4228 out.go:177] * docker "auto-20220921220528-5916" container is missing, will recreate.
	I0921 22:12:46.705045    4228 delete.go:124] DEMOLISHING auto-20220921220528-5916 ...
	I0921 22:12:46.720987    4228 cli_runner.go:164] Run: docker container inspect auto-20220921220528-5916 --format={{.State.Status}}
	W0921 22:12:46.906919    4228 cli_runner.go:211] docker container inspect auto-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:12:46.906919    4228 stop.go:75] unable to get state: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:46.906919    4228 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:46.921143    4228 cli_runner.go:164] Run: docker container inspect auto-20220921220528-5916 --format={{.State.Status}}
	W0921 22:12:47.124876    4228 cli_runner.go:211] docker container inspect auto-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:47.124876    4228 delete.go:82] Unable to get host status for auto-20220921220528-5916, assuming it has already been deleted: state: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:47.135036    4228 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220921220528-5916
	W0921 22:12:47.328486    4228 cli_runner.go:211] docker container inspect -f {{.Id}} auto-20220921220528-5916 returned with exit code 1
	I0921 22:12:47.328611    4228 kic.go:356] could not find the container auto-20220921220528-5916 to remove it. will try anyways
	I0921 22:12:47.335639    4228 cli_runner.go:164] Run: docker container inspect auto-20220921220528-5916 --format={{.State.Status}}
	W0921 22:12:47.546916    4228 cli_runner.go:211] docker container inspect auto-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:12:47.547100    4228 oci.go:84] error getting container status, will try to delete anyways: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:47.555820    4228 cli_runner.go:164] Run: docker exec --privileged -t auto-20220921220528-5916 /bin/bash -c "sudo init 0"
	W0921 22:12:47.763752    4228 cli_runner.go:211] docker exec --privileged -t auto-20220921220528-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:12:47.763752    4228 oci.go:646] error shutdown auto-20220921220528-5916: docker exec --privileged -t auto-20220921220528-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:48.788366    4228 cli_runner.go:164] Run: docker container inspect auto-20220921220528-5916 --format={{.State.Status}}
	W0921 22:12:48.976913    4228 cli_runner.go:211] docker container inspect auto-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:48.977038    4228 oci.go:658] temporary error verifying shutdown: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:48.977066    4228 oci.go:660] temporary error: container auto-20220921220528-5916 status is  but expect it to be exited
	I0921 22:12:48.977105    4228 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:49.328541    4228 cli_runner.go:164] Run: docker container inspect auto-20220921220528-5916 --format={{.State.Status}}
	W0921 22:12:49.535004    4228 cli_runner.go:211] docker container inspect auto-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:49.535348    4228 oci.go:658] temporary error verifying shutdown: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:49.535375    4228 oci.go:660] temporary error: container auto-20220921220528-5916 status is  but expect it to be exited
	I0921 22:12:49.535421    4228 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:49.995241    4228 cli_runner.go:164] Run: docker container inspect auto-20220921220528-5916 --format={{.State.Status}}
	W0921 22:12:50.203640    4228 cli_runner.go:211] docker container inspect auto-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:50.203640    4228 oci.go:658] temporary error verifying shutdown: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:50.203640    4228 oci.go:660] temporary error: container auto-20220921220528-5916 status is  but expect it to be exited
	I0921 22:12:50.203640    4228 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:51.117162    4228 cli_runner.go:164] Run: docker container inspect auto-20220921220528-5916 --format={{.State.Status}}
	W0921 22:12:51.324669    4228 cli_runner.go:211] docker container inspect auto-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:51.324669    4228 oci.go:658] temporary error verifying shutdown: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:51.324669    4228 oci.go:660] temporary error: container auto-20220921220528-5916 status is  but expect it to be exited
	I0921 22:12:51.324669    4228 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:53.056307    4228 cli_runner.go:164] Run: docker container inspect auto-20220921220528-5916 --format={{.State.Status}}
	W0921 22:12:53.265956    4228 cli_runner.go:211] docker container inspect auto-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:53.266190    4228 oci.go:658] temporary error verifying shutdown: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:53.266190    4228 oci.go:660] temporary error: container auto-20220921220528-5916 status is  but expect it to be exited
	I0921 22:12:53.266190    4228 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:56.607790    4228 cli_runner.go:164] Run: docker container inspect auto-20220921220528-5916 --format={{.State.Status}}
	W0921 22:12:56.788042    4228 cli_runner.go:211] docker container inspect auto-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:56.788042    4228 oci.go:658] temporary error verifying shutdown: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:56.788042    4228 oci.go:660] temporary error: container auto-20220921220528-5916 status is  but expect it to be exited
	I0921 22:12:56.788042    4228 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:59.519527    4228 cli_runner.go:164] Run: docker container inspect auto-20220921220528-5916 --format={{.State.Status}}
	W0921 22:12:59.740745    4228 cli_runner.go:211] docker container inspect auto-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:59.740745    4228 oci.go:658] temporary error verifying shutdown: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:12:59.740745    4228 oci.go:660] temporary error: container auto-20220921220528-5916 status is  but expect it to be exited
	I0921 22:12:59.740745    4228 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:04.771112    4228 cli_runner.go:164] Run: docker container inspect auto-20220921220528-5916 --format={{.State.Status}}
	W0921 22:13:04.975959    4228 cli_runner.go:211] docker container inspect auto-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:04.975959    4228 oci.go:658] temporary error verifying shutdown: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:04.975959    4228 oci.go:660] temporary error: container auto-20220921220528-5916 status is  but expect it to be exited
	I0921 22:13:04.975959    4228 oci.go:88] couldn't shut down auto-20220921220528-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "auto-20220921220528-5916": docker container inspect auto-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	 
	I0921 22:13:04.982980    4228 cli_runner.go:164] Run: docker rm -f -v auto-20220921220528-5916
	I0921 22:13:05.191098    4228 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220921220528-5916
	W0921 22:13:05.415193    4228 cli_runner.go:211] docker container inspect -f {{.Id}} auto-20220921220528-5916 returned with exit code 1
	I0921 22:13:05.424993    4228 cli_runner.go:164] Run: docker network inspect auto-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:13:05.650931    4228 cli_runner.go:211] docker network inspect auto-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:13:05.656931    4228 network_create.go:272] running [docker network inspect auto-20220921220528-5916] to gather additional debugging logs...
	I0921 22:13:05.656931    4228 cli_runner.go:164] Run: docker network inspect auto-20220921220528-5916
	W0921 22:13:05.868474    4228 cli_runner.go:211] docker network inspect auto-20220921220528-5916 returned with exit code 1
	I0921 22:13:05.868681    4228 network_create.go:275] error running [docker network inspect auto-20220921220528-5916]: docker network inspect auto-20220921220528-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220921220528-5916
	I0921 22:13:05.868681    4228 network_create.go:277] output of [docker network inspect auto-20220921220528-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220921220528-5916
	
	** /stderr **
	W0921 22:13:05.869626    4228 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:13:05.869626    4228 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:13:06.881789    4228 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:13:06.884056    4228 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:13:06.885083    4228 start.go:159] libmachine.API.Create for "auto-20220921220528-5916" (driver="docker")
	I0921 22:13:06.885083    4228 client.go:168] LocalClient.Create starting
	I0921 22:13:06.885912    4228 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:13:06.886282    4228 main.go:134] libmachine: Decoding PEM data...
	I0921 22:13:06.886282    4228 main.go:134] libmachine: Parsing certificate...
	I0921 22:13:06.886282    4228 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:13:06.886282    4228 main.go:134] libmachine: Decoding PEM data...
	I0921 22:13:06.886282    4228 main.go:134] libmachine: Parsing certificate...
	I0921 22:13:06.899341    4228 cli_runner.go:164] Run: docker network inspect auto-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:13:07.090452    4228 cli_runner.go:211] docker network inspect auto-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:13:07.099557    4228 network_create.go:272] running [docker network inspect auto-20220921220528-5916] to gather additional debugging logs...
	I0921 22:13:07.099557    4228 cli_runner.go:164] Run: docker network inspect auto-20220921220528-5916
	W0921 22:13:07.291243    4228 cli_runner.go:211] docker network inspect auto-20220921220528-5916 returned with exit code 1
	I0921 22:13:07.291476    4228 network_create.go:275] error running [docker network inspect auto-20220921220528-5916]: docker network inspect auto-20220921220528-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220921220528-5916
	I0921 22:13:07.291476    4228 network_create.go:277] output of [docker network inspect auto-20220921220528-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220921220528-5916
	
	** /stderr **
	I0921 22:13:07.302032    4228 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:13:07.513179    4228 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014b3a8] amended:false}} dirty:map[] misses:0}
	I0921 22:13:07.513179    4228 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:13:07.530341    4228 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014b3a8] amended:true}} dirty:map[192.168.49.0:0xc00014b3a8 192.168.58.0:0xc000f081c8] misses:0}
	I0921 22:13:07.530406    4228 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:13:07.530406    4228 network_create.go:115] attempt to create docker network auto-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:13:07.538476    4228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220921220528-5916 auto-20220921220528-5916
	W0921 22:13:07.742684    4228 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220921220528-5916 auto-20220921220528-5916 returned with exit code 1
	E0921 22:13:07.742799    4228 network_create.go:104] error while trying to create docker network auto-20220921220528-5916 192.168.58.0/24: create docker network auto-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220921220528-5916 auto-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bee0f6eb0899e04c3d558120cd8af3d8a95f428157985fe8b9f058c3e0288f74 (br-bee0f6eb0899): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:13:07.742799    4228 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220921220528-5916 auto-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bee0f6eb0899e04c3d558120cd8af3d8a95f428157985fe8b9f058c3e0288f74 (br-bee0f6eb0899): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network auto-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220921220528-5916 auto-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bee0f6eb0899e04c3d558120cd8af3d8a95f428157985fe8b9f058c3e0288f74 (br-bee0f6eb0899): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:13:07.753767    4228 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:13:07.972240    4228 cli_runner.go:164] Run: docker volume create auto-20220921220528-5916 --label name.minikube.sigs.k8s.io=auto-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:13:08.147223    4228 cli_runner.go:211] docker volume create auto-20220921220528-5916 --label name.minikube.sigs.k8s.io=auto-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:13:08.147223    4228 client.go:171] LocalClient.Create took 1.26213s
	I0921 22:13:10.163400    4228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:13:10.172977    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:10.369894    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:13:10.369894    4228 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:10.631717    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:10.869011    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:13:10.869011    4228 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:11.175347    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:11.385598    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:13:11.385598    4228 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:11.844982    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:12.054610    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	W0921 22:13:12.054610    4228 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	
	W0921 22:13:12.054610    4228 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:12.066616    4228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:13:12.074615    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:12.278608    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:13:12.278608    4228 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:12.476757    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:12.655574    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:13:12.655574    4228 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:12.949147    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:13.139993    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:13:13.139993    4228 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:13.646945    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:13.827292    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	W0921 22:13:13.827292    4228 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	
	W0921 22:13:13.827292    4228 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:13.827292    4228 start.go:128] duration metric: createHost completed in 6.9454117s
	I0921 22:13:13.841256    4228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:13:13.854187    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:14.063447    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:13:14.063447    4228 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:14.414468    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:14.625717    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:13:14.625717    4228 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:14.932830    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:15.142868    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:13:15.142868    4228 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:15.603295    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:15.799801    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	W0921 22:13:15.800175    4228 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	
	W0921 22:13:15.800241    4228 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:15.813714    4228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:13:15.825042    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:16.034945    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:13:16.034945    4228 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:16.229587    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:16.429158    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	I0921 22:13:16.429158    4228 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:16.958832    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916
	W0921 22:13:17.179099    4228 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916 returned with exit code 1
	W0921 22:13:17.179099    4228 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	
	W0921 22:13:17.179099    4228 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20220921220528-5916
	I0921 22:13:17.179099    4228 fix.go:57] fixHost completed within 30.6921943s
	I0921 22:13:17.179099    4228 start.go:83] releasing machines lock for "auto-20220921220528-5916", held for 30.6928686s
	W0921 22:13:17.180075    4228 out.go:239] * Failed to start docker container. Running "minikube delete -p auto-20220921220528-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220921220528-5916 container: docker volume create auto-20220921220528-5916 --label name.minikube.sigs.k8s.io=auto-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/auto-20220921220528-5916': mkdir /var/lib/docker/volumes/auto-20220921220528-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p auto-20220921220528-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220921220528-5916 container: docker volume create auto-20220921220528-5916 --label name.minikube.sigs.k8s.io=auto-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/auto-20220921220528-5916': mkdir /var/lib/docker/volumes/auto-20220921220528-5916: read-only file system
	
	I0921 22:13:17.184105    4228 out.go:177] 
	W0921 22:13:17.187077    4228 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220921220528-5916 container: docker volume create auto-20220921220528-5916 --label name.minikube.sigs.k8s.io=auto-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/auto-20220921220528-5916': mkdir /var/lib/docker/volumes/auto-20220921220528-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for auto-20220921220528-5916 container: docker volume create auto-20220921220528-5916 --label name.minikube.sigs.k8s.io=auto-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create auto-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/auto-20220921220528-5916': mkdir /var/lib/docker/volumes/auto-20220921220528-5916: read-only file system
	
	W0921 22:13:17.187077    4228 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:13:17.187077    4228 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:13:17.190188    4228 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/auto/Start (49.29s)
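The auto/Start failure above has two distinct Docker-side causes visible in the log: every candidate minikube subnet (192.168.49.0/24, then 192.168.58.0/24) conflicts with an existing br-* bridge network, and the daemon's volume root (/var/lib/docker/volumes) is read-only, which is what finally aborts the run with PR_DOCKER_READONLY_VOL. The following is only a diagnostic sketch, not part of the recorded run; it assumes a POSIX shell with the docker CLI available (e.g. inside WSL2 on this host), and the volume name probe-vol is made up for illustration.

    # List every network's subnet to see what overlaps minikube's candidate ranges
    # (same Go template the test itself uses for "docker network inspect"):
    docker network ls -q | xargs docker network inspect \
        --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'

    # Probe whether the daemon can create volumes at all; a read-only
    # /var/lib/docker/volumes reproduces the "read-only file system" error above:
    docker volume create probe-vol && docker volume rm probe-vol

If both probes fail the same way as the logged commands, restarting Docker Desktop (the log's own "Suggestion: Restart Docker") is the usual first step; see the linked issue #6825 for background.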

                                                
                                    
TestNetworkPlugins/group/calico/Start (49.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20220921220531-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20220921220531-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 60 (49.0654582s)

                                                
                                                
-- stdout --
	* [calico-20220921220531-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node calico-20220921220531-5916 in cluster calico-20220921220531-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "calico-20220921220531-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:12:32.005931    8700 out.go:296] Setting OutFile to fd 1852 ...
	I0921 22:12:32.074708    8700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:12:32.074708    8700 out.go:309] Setting ErrFile to fd 1936...
	I0921 22:12:32.074708    8700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:12:32.095007    8700 out.go:303] Setting JSON to false
	I0921 22:12:32.097056    8700 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4420,"bootTime":1663793932,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:12:32.097056    8700 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:12:32.107046    8700 out.go:177] * [calico-20220921220531-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:12:32.110024    8700 notify.go:214] Checking for updates...
	I0921 22:12:32.114012    8700 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:12:32.116059    8700 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:12:32.119050    8700 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:12:32.122054    8700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:12:32.125089    8700 config.go:180] Loaded profile config "auto-20220921220528-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:32.125781    8700 config.go:180] Loaded profile config "default-k8s-different-port-20220921221221-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:32.126332    8700 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:32.126701    8700 config.go:180] Loaded profile config "newest-cni-20220921221222-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:12:32.126701    8700 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:12:32.399649    8700 docker.go:137] docker version: linux-20.10.17
	I0921 22:12:32.407923    8700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:12:32.949220    8700 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:87 SystemTime:2022-09-21 22:12:32.5697936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:12:32.952208    8700 out.go:177] * Using the docker driver based on user configuration
	I0921 22:12:32.954258    8700 start.go:284] selected driver: docker
	I0921 22:12:32.954258    8700 start.go:808] validating driver "docker" against <nil>
	I0921 22:12:32.955260    8700 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:12:33.014690    8700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:12:33.625308    8700 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:88 SystemTime:2022-09-21 22:12:33.1873382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:12:33.625308    8700 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:12:33.626513    8700 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:12:33.630360    8700 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:12:33.634265    8700 cni.go:95] Creating CNI manager for "calico"
	I0921 22:12:33.634265    8700 start_flags.go:311] Found "Calico" CNI - setting NetworkPlugin=cni
	I0921 22:12:33.634265    8700 start_flags.go:316] config:
	{Name:calico-20220921220531-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:calico-20220921220531-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:12:33.637291    8700 out.go:177] * Starting control plane node calico-20220921220531-5916 in cluster calico-20220921220531-5916
	I0921 22:12:33.640572    8700 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:12:33.643163    8700 out.go:177] * Pulling base image ...
	I0921 22:12:33.646634    8700 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:12:33.646634    8700 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:12:33.647036    8700 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:12:33.647036    8700 cache.go:57] Caching tarball of preloaded images
	I0921 22:12:33.647036    8700 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:12:33.647671    8700 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:12:33.647756    8700 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-20220921220531-5916\config.json ...
	I0921 22:12:33.647756    8700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-20220921220531-5916\config.json: {Name:mk4e8166a933448431bc2a3d18f8ef1a83fa370c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:12:33.904422    8700 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:12:33.904422    8700 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:12:33.904422    8700 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:12:33.904422    8700 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:12:33.904951    8700 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:12:33.904951    8700 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:12:33.905160    8700 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:12:33.905182    8700 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:12:33.905219    8700 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:12:36.345030    8700 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:12:36.345030    8700 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:12:36.345030    8700 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:12:36.345030    8700 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:12:36.555275    8700 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?  (progress line repeated; final frame: 0 B, ?%, ? p/s, 800ms)
	I0921 22:12:38.116135    8700 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:12:38.116309    8700 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:12:38.116355    8700 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:12:38.116428    8700 start.go:364] acquiring machines lock for calico-20220921220531-5916: {Name:mkf59236360812803f5230055efcc927d9ea89a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:12:38.116684    8700 start.go:368] acquired machines lock for "calico-20220921220531-5916" in 173.4µs
	I0921 22:12:38.116684    8700 start.go:93] Provisioning new machine with config: &{Name:calico-20220921220531-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:calico-20220921220531-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:12:38.116684    8700 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:12:38.125851    8700 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:12:38.125928    8700 start.go:159] libmachine.API.Create for "calico-20220921220531-5916" (driver="docker")
	I0921 22:12:38.125928    8700 client.go:168] LocalClient.Create starting
	I0921 22:12:38.125928    8700 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:12:38.127037    8700 main.go:134] libmachine: Decoding PEM data...
	I0921 22:12:38.127037    8700 main.go:134] libmachine: Parsing certificate...
	I0921 22:12:38.127304    8700 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:12:38.127515    8700 main.go:134] libmachine: Decoding PEM data...
	I0921 22:12:38.127587    8700 main.go:134] libmachine: Parsing certificate...
	I0921 22:12:38.137303    8700 cli_runner.go:164] Run: docker network inspect calico-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:12:38.349922    8700 cli_runner.go:211] docker network inspect calico-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:12:38.357172    8700 network_create.go:272] running [docker network inspect calico-20220921220531-5916] to gather additional debugging logs...
	I0921 22:12:38.357172    8700 cli_runner.go:164] Run: docker network inspect calico-20220921220531-5916
	W0921 22:12:38.551438    8700 cli_runner.go:211] docker network inspect calico-20220921220531-5916 returned with exit code 1
	I0921 22:12:38.551438    8700 network_create.go:275] error running [docker network inspect calico-20220921220531-5916]: docker network inspect calico-20220921220531-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220921220531-5916
	I0921 22:12:38.551438    8700 network_create.go:277] output of [docker network inspect calico-20220921220531-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220921220531-5916
	
	** /stderr **
	I0921 22:12:38.560198    8700 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:12:38.788242    8700 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0008b80c0] misses:0}
	I0921 22:12:38.788242    8700 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:12:38.788242    8700 network_create.go:115] attempt to create docker network calico-20220921220531-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:12:38.800049    8700 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220921220531-5916 calico-20220921220531-5916
	W0921 22:12:39.001042    8700 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220921220531-5916 calico-20220921220531-5916 returned with exit code 1
	E0921 22:12:39.001042    8700 network_create.go:104] error while trying to create docker network calico-20220921220531-5916 192.168.49.0/24: create docker network calico-20220921220531-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220921220531-5916 calico-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 192ff8cd02088e78ef8b5e216d9356c1a0a7ec3c7612ec08a7076fd4f7a5efef (br-192ff8cd0208): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:12:39.001042    8700 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220921220531-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220921220531-5916 calico-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 192ff8cd02088e78ef8b5e216d9356c1a0a7ec3c7612ec08a7076fd4f7a5efef (br-192ff8cd0208): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220921220531-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220921220531-5916 calico-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 192ff8cd02088e78ef8b5e216d9356c1a0a7ec3c7612ec08a7076fd4f7a5efef (br-192ff8cd0208): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
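
The create fails here because the subnet minikube reserved, 192.168.49.0/24, overlaps with an existing bridge network on this host (br-a04d36bfb3cf in the daemon error above). A possible way to list which Docker networks already occupy which subnets, assuming the same Docker Desktop CLI the test uses is still reachable (illustrative commands, not run as part of this log):

    docker network ls --format '{{.ID}}  {{.Name}}'
    docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' $(docker network ls -q)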
	
	I0921 22:12:39.020606    8700 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:12:39.242505    8700 cli_runner.go:164] Run: docker volume create calico-20220921220531-5916 --label name.minikube.sigs.k8s.io=calico-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:12:39.481021    8700 cli_runner.go:211] docker volume create calico-20220921220531-5916 --label name.minikube.sigs.k8s.io=calico-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:12:39.481200    8700 client.go:171] LocalClient.Create took 1.3552618s
	I0921 22:12:41.505383    8700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:12:41.512125    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:12:41.698271    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:12:41.698271    8700 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:41.988500    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:12:42.227668    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:12:42.227668    8700 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:42.778746    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:12:42.971785    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	W0921 22:12:42.972034    8700 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	
	W0921 22:12:42.972093    8700 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:42.983804    8700 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:12:42.990794    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:12:43.187936    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:12:43.187936    8700 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:43.443331    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:12:43.636695    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:12:43.636942    8700 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:43.992718    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:12:44.186834    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:12:44.186893    8700 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:44.870713    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:12:45.078803    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	W0921 22:12:45.079118    8700 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	
	W0921 22:12:45.079257    8700 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:45.079309    8700 start.go:128] duration metric: createHost completed in 6.962518s
	I0921 22:12:45.079309    8700 start.go:83] releasing machines lock for "calico-20220921220531-5916", held for 6.9625699s
	W0921 22:12:45.079341    8700 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for calico-20220921220531-5916 container: docker volume create calico-20220921220531-5916 --label name.minikube.sigs.k8s.io=calico-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/calico-20220921220531-5916': mkdir /var/lib/docker/volumes/calico-20220921220531-5916: read-only file system
	I0921 22:12:45.094047    8700 cli_runner.go:164] Run: docker container inspect calico-20220921220531-5916 --format={{.State.Status}}
	W0921 22:12:45.281030    8700 cli_runner.go:211] docker container inspect calico-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:45.281221    8700 delete.go:82] Unable to get host status for calico-20220921220531-5916, assuming it has already been deleted: state: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	W0921 22:12:45.281472    8700 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for calico-20220921220531-5916 container: docker volume create calico-20220921220531-5916 --label name.minikube.sigs.k8s.io=calico-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/calico-20220921220531-5916': mkdir /var/lib/docker/volumes/calico-20220921220531-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for calico-20220921220531-5916 container: docker volume create calico-20220921220531-5916 --label name.minikube.sigs.k8s.io=calico-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/calico-20220921220531-5916': mkdir /var/lib/docker/volumes/calico-20220921220531-5916: read-only file system
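
This first create attempt dies at docker volume create: the daemon cannot make /var/lib/docker/volumes/calico-20220921220531-5916 because the file system is read-only, which suggests the Docker Desktop Linux VM has its data filesystem mounted read-only, so no node container could be created at all. A quick probe against the same daemon, independent of minikube, would confirm whether volume creation fails in general (illustrative only; the volume name below is hypothetical and was not part of this run):

    docker volume create probe-readonly-check   # hypothetical probe volume, safe to remove afterwards
    docker volume rm probe-readonly-check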
	
	I0921 22:12:45.281617    8700 start.go:617] Will try again in 5 seconds ...
	I0921 22:12:50.281966    8700 start.go:364] acquiring machines lock for calico-20220921220531-5916: {Name:mkf59236360812803f5230055efcc927d9ea89a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:12:50.281966    8700 start.go:368] acquired machines lock for "calico-20220921220531-5916" in 0s
	I0921 22:12:50.282632    8700 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:12:50.282632    8700 fix.go:55] fixHost starting: 
	I0921 22:12:50.296830    8700 cli_runner.go:164] Run: docker container inspect calico-20220921220531-5916 --format={{.State.Status}}
	W0921 22:12:50.485455    8700 cli_runner.go:211] docker container inspect calico-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:50.485604    8700 fix.go:103] recreateIfNeeded on calico-20220921220531-5916: state= err=unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:50.485604    8700 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:12:50.489575    8700 out.go:177] * docker "calico-20220921220531-5916" container is missing, will recreate.
	I0921 22:12:50.494234    8700 delete.go:124] DEMOLISHING calico-20220921220531-5916 ...
	I0921 22:12:50.505501    8700 cli_runner.go:164] Run: docker container inspect calico-20220921220531-5916 --format={{.State.Status}}
	W0921 22:12:50.702568    8700 cli_runner.go:211] docker container inspect calico-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:12:50.702568    8700 stop.go:75] unable to get state: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:50.702568    8700 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:50.718163    8700 cli_runner.go:164] Run: docker container inspect calico-20220921220531-5916 --format={{.State.Status}}
	W0921 22:12:50.891089    8700 cli_runner.go:211] docker container inspect calico-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:50.891089    8700 delete.go:82] Unable to get host status for calico-20220921220531-5916, assuming it has already been deleted: state: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:50.897089    8700 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-20220921220531-5916
	W0921 22:12:51.093179    8700 cli_runner.go:211] docker container inspect -f {{.Id}} calico-20220921220531-5916 returned with exit code 1
	I0921 22:12:51.093275    8700 kic.go:356] could not find the container calico-20220921220531-5916 to remove it. will try anyways
	I0921 22:12:51.102671    8700 cli_runner.go:164] Run: docker container inspect calico-20220921220531-5916 --format={{.State.Status}}
	W0921 22:12:51.324669    8700 cli_runner.go:211] docker container inspect calico-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:12:51.324669    8700 oci.go:84] error getting container status, will try to delete anyways: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:51.333216    8700 cli_runner.go:164] Run: docker exec --privileged -t calico-20220921220531-5916 /bin/bash -c "sudo init 0"
	W0921 22:12:51.528244    8700 cli_runner.go:211] docker exec --privileged -t calico-20220921220531-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:12:51.528384    8700 oci.go:646] error shutdown calico-20220921220531-5916: docker exec --privileged -t calico-20220921220531-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:52.541611    8700 cli_runner.go:164] Run: docker container inspect calico-20220921220531-5916 --format={{.State.Status}}
	W0921 22:12:52.749304    8700 cli_runner.go:211] docker container inspect calico-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:52.749304    8700 oci.go:658] temporary error verifying shutdown: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:52.749304    8700 oci.go:660] temporary error: container calico-20220921220531-5916 status is  but expect it to be exited
	I0921 22:12:52.749304    8700 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:53.085939    8700 cli_runner.go:164] Run: docker container inspect calico-20220921220531-5916 --format={{.State.Status}}
	W0921 22:12:53.281295    8700 cli_runner.go:211] docker container inspect calico-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:53.281645    8700 oci.go:658] temporary error verifying shutdown: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:53.281693    8700 oci.go:660] temporary error: container calico-20220921220531-5916 status is  but expect it to be exited
	I0921 22:12:53.281727    8700 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:53.751382    8700 cli_runner.go:164] Run: docker container inspect calico-20220921220531-5916 --format={{.State.Status}}
	W0921 22:12:53.946496    8700 cli_runner.go:211] docker container inspect calico-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:53.946496    8700 oci.go:658] temporary error verifying shutdown: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:53.946496    8700 oci.go:660] temporary error: container calico-20220921220531-5916 status is  but expect it to be exited
	I0921 22:12:53.946496    8700 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:54.867720    8700 cli_runner.go:164] Run: docker container inspect calico-20220921220531-5916 --format={{.State.Status}}
	W0921 22:12:55.076340    8700 cli_runner.go:211] docker container inspect calico-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:55.076457    8700 oci.go:658] temporary error verifying shutdown: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:55.076457    8700 oci.go:660] temporary error: container calico-20220921220531-5916 status is  but expect it to be exited
	I0921 22:12:55.076521    8700 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:56.812146    8700 cli_runner.go:164] Run: docker container inspect calico-20220921220531-5916 --format={{.State.Status}}
	W0921 22:12:57.004117    8700 cli_runner.go:211] docker container inspect calico-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:12:57.004231    8700 oci.go:658] temporary error verifying shutdown: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:12:57.004231    8700 oci.go:660] temporary error: container calico-20220921220531-5916 status is  but expect it to be exited
	I0921 22:12:57.004231    8700 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:00.343838    8700 cli_runner.go:164] Run: docker container inspect calico-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:00.523729    8700 cli_runner.go:211] docker container inspect calico-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:00.523729    8700 oci.go:658] temporary error verifying shutdown: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:00.523729    8700 oci.go:660] temporary error: container calico-20220921220531-5916 status is  but expect it to be exited
	I0921 22:13:00.523729    8700 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:03.249019    8700 cli_runner.go:164] Run: docker container inspect calico-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:03.428944    8700 cli_runner.go:211] docker container inspect calico-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:03.429127    8700 oci.go:658] temporary error verifying shutdown: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:03.429177    8700 oci.go:660] temporary error: container calico-20220921220531-5916 status is  but expect it to be exited
	I0921 22:13:03.429208    8700 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:08.451640    8700 cli_runner.go:164] Run: docker container inspect calico-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:08.646563    8700 cli_runner.go:211] docker container inspect calico-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:08.646563    8700 oci.go:658] temporary error verifying shutdown: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:08.646563    8700 oci.go:660] temporary error: container calico-20220921220531-5916 status is  but expect it to be exited
	I0921 22:13:08.646563    8700 oci.go:88] couldn't shut down calico-20220921220531-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "calico-20220921220531-5916": docker container inspect calico-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	 
	I0921 22:13:08.655569    8700 cli_runner.go:164] Run: docker rm -f -v calico-20220921220531-5916
	I0921 22:13:08.862567    8700 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-20220921220531-5916
	W0921 22:13:09.043443    8700 cli_runner.go:211] docker container inspect -f {{.Id}} calico-20220921220531-5916 returned with exit code 1
	I0921 22:13:09.051971    8700 cli_runner.go:164] Run: docker network inspect calico-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:13:09.262032    8700 cli_runner.go:211] docker network inspect calico-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:13:09.268026    8700 network_create.go:272] running [docker network inspect calico-20220921220531-5916] to gather additional debugging logs...
	I0921 22:13:09.268026    8700 cli_runner.go:164] Run: docker network inspect calico-20220921220531-5916
	W0921 22:13:09.453637    8700 cli_runner.go:211] docker network inspect calico-20220921220531-5916 returned with exit code 1
	I0921 22:13:09.453815    8700 network_create.go:275] error running [docker network inspect calico-20220921220531-5916]: docker network inspect calico-20220921220531-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220921220531-5916
	I0921 22:13:09.453815    8700 network_create.go:277] output of [docker network inspect calico-20220921220531-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220921220531-5916
	
	** /stderr **
	W0921 22:13:09.454762    8700 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:13:09.454762    8700 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:13:10.465270    8700 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:13:10.470388    8700 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:13:10.470388    8700 start.go:159] libmachine.API.Create for "calico-20220921220531-5916" (driver="docker")
	I0921 22:13:10.470388    8700 client.go:168] LocalClient.Create starting
	I0921 22:13:10.471332    8700 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:13:10.471332    8700 main.go:134] libmachine: Decoding PEM data...
	I0921 22:13:10.471332    8700 main.go:134] libmachine: Parsing certificate...
	I0921 22:13:10.471332    8700 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:13:10.471947    8700 main.go:134] libmachine: Decoding PEM data...
	I0921 22:13:10.472037    8700 main.go:134] libmachine: Parsing certificate...
	I0921 22:13:10.482774    8700 cli_runner.go:164] Run: docker network inspect calico-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:13:10.672714    8700 cli_runner.go:211] docker network inspect calico-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:13:10.679746    8700 network_create.go:272] running [docker network inspect calico-20220921220531-5916] to gather additional debugging logs...
	I0921 22:13:10.679746    8700 cli_runner.go:164] Run: docker network inspect calico-20220921220531-5916
	W0921 22:13:10.884991    8700 cli_runner.go:211] docker network inspect calico-20220921220531-5916 returned with exit code 1
	I0921 22:13:10.884991    8700 network_create.go:275] error running [docker network inspect calico-20220921220531-5916]: docker network inspect calico-20220921220531-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220921220531-5916
	I0921 22:13:10.884991    8700 network_create.go:277] output of [docker network inspect calico-20220921220531-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220921220531-5916
	
	** /stderr **
	I0921 22:13:10.892012    8700 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:13:11.104920    8700 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0008b80c0] amended:false}} dirty:map[] misses:0}
	I0921 22:13:11.104920    8700 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:13:11.121914    8700 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0008b80c0] amended:true}} dirty:map[192.168.49.0:0xc0008b80c0 192.168.58.0:0xc00000aa58] misses:0}
	I0921 22:13:11.121914    8700 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:13:11.121914    8700 network_create.go:115] attempt to create docker network calico-20220921220531-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:13:11.128912    8700 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220921220531-5916 calico-20220921220531-5916
	W0921 22:13:11.337604    8700 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220921220531-5916 calico-20220921220531-5916 returned with exit code 1
	E0921 22:13:11.337604    8700 network_create.go:104] error while trying to create docker network calico-20220921220531-5916 192.168.58.0/24: create docker network calico-20220921220531-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220921220531-5916 calico-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 357fbe782ac666c8bffe8164b94677c0369313670cf646ed07db49a7999ebfb2 (br-357fbe782ac6): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:13:11.337604    8700 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220921220531-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220921220531-5916 calico-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 357fbe782ac666c8bffe8164b94677c0369313670cf646ed07db49a7999ebfb2 (br-357fbe782ac6): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network calico-20220921220531-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220921220531-5916 calico-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 357fbe782ac666c8bffe8164b94677c0369313670cf646ed07db49a7999ebfb2 (br-357fbe782ac6): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:13:11.353597    8700 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:13:11.588552    8700 cli_runner.go:164] Run: docker volume create calico-20220921220531-5916 --label name.minikube.sigs.k8s.io=calico-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:13:11.791149    8700 cli_runner.go:211] docker volume create calico-20220921220531-5916 --label name.minikube.sigs.k8s.io=calico-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:13:11.791149    8700 client.go:171] LocalClient.Create took 1.3207506s
	I0921 22:13:13.808251    8700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:13:13.821305    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:14.031460    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:13:14.031460    8700 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:14.292303    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:14.484249    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:13:14.484249    8700 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:14.792219    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:14.971851    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:13:14.971851    8700 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:15.428539    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:15.627298    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	W0921 22:13:15.627298    8700 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	
	W0921 22:13:15.627298    8700 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:15.644788    8700 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:13:15.656131    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:15.862595    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:13:15.862595    8700 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:16.060694    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:16.252457    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:13:16.252457    8700 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:16.533119    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:16.728515    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:13:16.728515    8700 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:17.235853    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:17.478396    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	W0921 22:13:17.478396    8700 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	
	W0921 22:13:17.478396    8700 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:17.478396    8700 start.go:128] duration metric: createHost completed in 7.0130143s
	I0921 22:13:17.500401    8700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:13:17.511395    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:17.702402    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:13:17.702402    8700 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:18.055978    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:18.277521    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:13:18.277521    8700 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:18.596333    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:18.780411    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:13:18.780411    8700 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:19.243654    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:19.450242    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	W0921 22:13:19.450242    8700 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	
	W0921 22:13:19.450242    8700 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:19.460277    8700 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:13:19.466277    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:19.690578    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:13:19.690949    8700 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:19.889125    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:20.080432    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	I0921 22:13:20.080703    8700 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:20.601137    8700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916
	W0921 22:13:20.796436    8700 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916 returned with exit code 1
	W0921 22:13:20.796436    8700 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	
	W0921 22:13:20.796436    8700 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20220921220531-5916
	I0921 22:13:20.796436    8700 fix.go:57] fixHost completed within 30.5135627s
	I0921 22:13:20.796436    8700 start.go:83] releasing machines lock for "calico-20220921220531-5916", held for 30.5142285s
	W0921 22:13:20.796436    8700 out.go:239] * Failed to start docker container. Running "minikube delete -p calico-20220921220531-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220921220531-5916 container: docker volume create calico-20220921220531-5916 --label name.minikube.sigs.k8s.io=calico-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/calico-20220921220531-5916': mkdir /var/lib/docker/volumes/calico-20220921220531-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p calico-20220921220531-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220921220531-5916 container: docker volume create calico-20220921220531-5916 --label name.minikube.sigs.k8s.io=calico-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/calico-20220921220531-5916': mkdir /var/lib/docker/volumes/calico-20220921220531-5916: read-only file system
	
	I0921 22:13:20.801433    8700 out.go:177] 
	W0921 22:13:20.803445    8700 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220921220531-5916 container: docker volume create calico-20220921220531-5916 --label name.minikube.sigs.k8s.io=calico-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/calico-20220921220531-5916': mkdir /var/lib/docker/volumes/calico-20220921220531-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for calico-20220921220531-5916 container: docker volume create calico-20220921220531-5916 --label name.minikube.sigs.k8s.io=calico-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create calico-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/calico-20220921220531-5916': mkdir /var/lib/docker/volumes/calico-20220921220531-5916: read-only file system
	
	W0921 22:13:20.803445    8700 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:13:20.804459    8700 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:13:20.807456    8700 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/calico/Start (49.18s)
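
The failure above reduces to two Docker-side problems on the host: the requested subnet 192.168.58.0/24 collides with an existing bridge network, and the daemon's volume root is mounted read-only. A minimal diagnostic sketch (plain shell commands to run on the affected host, not part of the test itself; the volume name "rwcheck" is arbitrary) that would confirm both conditions:

    # Print every Docker network with its IPv4 subnet to find the one that
    # overlaps 192.168.58.0/24 (reported above as br-8a3cd8d165a4)
    for n in $(docker network ls -q); do
      docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' "$n"
    done

    # Reproduce the volume error independently of minikube; on a healthy
    # daemon this creates and then removes the throwaway volume
    docker volume create rwcheck && docker volume rm rwcheck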

TestStartStop/group/default-k8s-different-port/serial/DeployApp (1.93s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221221-5916 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220921221221-5916 create -f testdata\busybox.yaml: exit status 1 (183.6761ms)

** stderr ** 
	error: context "default-k8s-different-port-20220921221221-5916" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-different-port-20220921221221-5916 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221221-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220921221221-5916: exit status 1 (259.5665ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220921221221-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916: exit status 7 (609.1556ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:13:12.153077     760 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220921221221-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221221-5916

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220921221221-5916: exit status 1 (261.1738ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220921221221-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916: exit status 7 (593.1988ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:13:13.029007    8760 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220921221221-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (1.93s)
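
The kubectl failures above share one root cause: the earlier cluster start never succeeded, so no context named default-k8s-different-port-20220921221221-5916 was ever written to the kubeconfig and no container exists to inspect. A short sketch (standard kubectl/minikube commands, run from a POSIX shell on the host) of how to verify that directly:

    # Is the context present in the kubeconfig at all?
    kubectl config get-contexts -o name | grep default-k8s-different-port || echo "context missing"

    # Does minikube still believe the profile exists?
    minikube profile list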

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.68s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220921221221-5916 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221221-5916 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220921221221-5916 describe deploy/metrics-server -n kube-system: exit status 1 (182.6134ms)

** stderr ** 
	error: context "default-k8s-different-port-20220921221221-5916" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-different-port-20220921221221-5916 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221221-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220921221221-5916: exit status 1 (257.1104ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220921221221-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916: exit status 7 (595.6868ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:13:14.689784    7868 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220921221221-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.68s)

TestStartStop/group/newest-cni/serial/Stop (20.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-20220921221222-5916 --alsologtostderr -v=3

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p newest-cni-20220921221222-5916 --alsologtostderr -v=3: exit status 82 (19.3659158s)

-- stdout --
	* Stopping node "newest-cni-20220921221222-5916"  ...
	* Stopping node "newest-cni-20220921221222-5916"  ...
	* Stopping node "newest-cni-20220921221222-5916"  ...
	* Stopping node "newest-cni-20220921221222-5916"  ...
	* Stopping node "newest-cni-20220921221222-5916"  ...
	* Stopping node "newest-cni-20220921221222-5916"  ...
	
	

-- /stdout --
** stderr ** 
	I0921 22:13:13.896178    6344 out.go:296] Setting OutFile to fd 1732 ...
	I0921 22:13:13.960456    6344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:13:13.960456    6344 out.go:309] Setting ErrFile to fd 1728...
	I0921 22:13:13.960456    6344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:13:13.993445    6344 out.go:303] Setting JSON to false
	I0921 22:13:13.993445    6344 daemonize_windows.go:44] trying to kill existing schedule stop for profile newest-cni-20220921221222-5916...
	I0921 22:13:14.010465    6344 ssh_runner.go:195] Run: systemctl --version
	I0921 22:13:14.019452    6344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:14.218179    6344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:14.218179    6344 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:14.507262    6344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:14.705161    6344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:14.705281    6344 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:15.260119    6344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:15.454538    6344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:15.464538    6344 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0921 22:13:15.471500    6344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:15.672941    6344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:15.673256    6344 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:15.918722    6344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:16.127928    6344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:16.127928    6344 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:16.483922    6344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:16.712512    6344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:16.712512    6344 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:17.394655    6344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:13:17.607414    6344 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	E0921 22:13:17.607414    6344 daemonize_windows.go:38] error terminating scheduled stop for profile newest-cni-20220921221222-5916: stopping schedule-stop service for profile newest-cni-20220921221222-5916: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:17.607414    6344 mustload.go:65] Loading cluster: newest-cni-20220921221222-5916
	I0921 22:13:17.608424    6344 config.go:180] Loaded profile config "newest-cni-20220921221222-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:13:17.608424    6344 stop.go:39] StopHost: newest-cni-20220921221222-5916
	I0921 22:13:17.613402    6344 out.go:177] * Stopping node "newest-cni-20220921221222-5916"  ...
	I0921 22:13:17.631401    6344 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:17.829140    6344 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:17.829140    6344 stop.go:75] unable to get state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	W0921 22:13:17.829140    6344 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:17.829140    6344 retry.go:31] will retry after 656.519254ms: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:18.496558    6344 stop.go:39] StopHost: newest-cni-20220921221222-5916
	I0921 22:13:18.501128    6344 out.go:177] * Stopping node "newest-cni-20220921221222-5916"  ...
	I0921 22:13:18.518522    6344 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:18.732343    6344 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:18.732343    6344 stop.go:75] unable to get state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	W0921 22:13:18.732343    6344 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:18.732343    6344 retry.go:31] will retry after 895.454278ms: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:19.636514    6344 stop.go:39] StopHost: newest-cni-20220921221222-5916
	I0921 22:13:19.641371    6344 out.go:177] * Stopping node "newest-cni-20220921221222-5916"  ...
	I0921 22:13:19.659934    6344 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:19.847458    6344 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:19.847725    6344 stop.go:75] unable to get state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	W0921 22:13:19.847763    6344 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:19.847799    6344 retry.go:31] will retry after 1.802051686s: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:21.658508    6344 stop.go:39] StopHost: newest-cni-20220921221222-5916
	I0921 22:13:21.663506    6344 out.go:177] * Stopping node "newest-cni-20220921221222-5916"  ...
	I0921 22:13:21.685056    6344 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:21.892245    6344 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:21.892245    6344 stop.go:75] unable to get state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	W0921 22:13:21.892245    6344 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:21.892245    6344 retry.go:31] will retry after 3.426342621s: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:25.331051    6344 stop.go:39] StopHost: newest-cni-20220921221222-5916
	I0921 22:13:25.336589    6344 out.go:177] * Stopping node "newest-cni-20220921221222-5916"  ...
	I0921 22:13:25.359298    6344 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:25.563154    6344 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:25.563154    6344 stop.go:75] unable to get state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	W0921 22:13:25.563154    6344 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:25.563154    6344 retry.go:31] will retry after 6.650302303s: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:32.215683    6344 stop.go:39] StopHost: newest-cni-20220921221222-5916
	I0921 22:13:32.229151    6344 out.go:177] * Stopping node "newest-cni-20220921221222-5916"  ...
	I0921 22:13:32.246041    6344 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:32.440247    6344 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:32.440393    6344 stop.go:75] unable to get state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	W0921 22:13:32.440393    6344 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:32.442752    6344 out.go:177] 
	W0921 22:13:32.443755    6344 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-20220921221222-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-20220921221222-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:13:32.443755    6344 out.go:239] * 
	* 
	W0921 22:13:32.940800    6344 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_153.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:13:32.945139    6344 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p newest-cni-20220921221222-5916 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220921221222-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220921221222-5916: exit status 1 (269.8261ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220921221222-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916: exit status 7 (601.1709ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:13:33.839257    8176 status.go:247] status error: host: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220921221222-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (20.25s)
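
Here minikube stop spends the whole timeout retrying docker container inspect against a container that was never created, so the run ends with GUEST_STOP_TIMEOUT instead of a clean stop. A brief sketch (plain docker/minikube commands on the host) of how to check for that state and clear the stale profile:

    # Show whether the profile's container exists at all; in this run it does not
    docker ps -a --filter name=newest-cni-20220921221222-5916 --format '{{.Names}}: {{.Status}}'

    # Remove the stale profile so a later start is not confused by leftover config
    minikube delete -p newest-cni-20220921221222-5916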

TestStartStop/group/default-k8s-different-port/serial/Stop (20.19s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220921221221-5916 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220921221221-5916 --alsologtostderr -v=3: exit status 82 (19.3379098s)

-- stdout --
	* Stopping node "default-k8s-different-port-20220921221221-5916"  ...
	* Stopping node "default-k8s-different-port-20220921221221-5916"  ...
	* Stopping node "default-k8s-different-port-20220921221221-5916"  ...
	* Stopping node "default-k8s-different-port-20220921221221-5916"  ...
	* Stopping node "default-k8s-different-port-20220921221221-5916"  ...
	* Stopping node "default-k8s-different-port-20220921221221-5916"  ...
	
	

-- /stdout --
** stderr ** 
	I0921 22:13:14.974874    2484 out.go:296] Setting OutFile to fd 1936 ...
	I0921 22:13:15.045320    2484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:13:15.045320    2484 out.go:309] Setting ErrFile to fd 1904...
	I0921 22:13:15.045320    2484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:13:15.056464    2484 out.go:303] Setting JSON to false
	I0921 22:13:15.057529    2484 daemonize_windows.go:44] trying to kill existing schedule stop for profile default-k8s-different-port-20220921221221-5916...
	I0921 22:13:15.069530    2484 ssh_runner.go:195] Run: systemctl --version
	I0921 22:13:15.077530    2484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:15.282428    2484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:15.282428    2484 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:15.570278    2484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:15.784219    2484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:15.784219    2484 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:16.344981    2484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:16.553338    2484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:16.565652    2484 ssh_runner.go:195] Run: sudo service minikube-scheduled-stop stop
	I0921 22:13:16.573382    2484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:16.760521    2484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:16.760521    2484 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:17.018554    2484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:17.211285    2484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:17.211349    2484 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:17.566410    2484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:17.765145    2484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:17.765145    2484 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:18.443261    2484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:13:18.653339    2484 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	E0921 22:13:18.653339    2484 daemonize_windows.go:38] error terminating scheduled stop for profile default-k8s-different-port-20220921221221-5916: stopping schedule-stop service for profile default-k8s-different-port-20220921221221-5916: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:18.653339    2484 mustload.go:65] Loading cluster: default-k8s-different-port-20220921221221-5916
	I0921 22:13:18.654403    2484 config.go:180] Loaded profile config "default-k8s-different-port-20220921221221-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:13:18.654403    2484 stop.go:39] StopHost: default-k8s-different-port-20220921221221-5916
	I0921 22:13:18.658346    2484 out.go:177] * Stopping node "default-k8s-different-port-20220921221221-5916"  ...
	I0921 22:13:18.676346    2484 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:18.860383    2484 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:18.860383    2484 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	W0921 22:13:18.860383    2484 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:18.860383    2484 retry.go:31] will retry after 656.519254ms: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:19.529356    2484 stop.go:39] StopHost: default-k8s-different-port-20220921221221-5916
	I0921 22:13:19.535000    2484 out.go:177] * Stopping node "default-k8s-different-port-20220921221221-5916"  ...
	I0921 22:13:19.561389    2484 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:19.769027    2484 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:19.769027    2484 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	W0921 22:13:19.769027    2484 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:19.769027    2484 retry.go:31] will retry after 895.454278ms: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:20.669682    2484 stop.go:39] StopHost: default-k8s-different-port-20220921221221-5916
	I0921 22:13:20.674082    2484 out.go:177] * Stopping node "default-k8s-different-port-20220921221221-5916"  ...
	I0921 22:13:20.702942    2484 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:20.876433    2484 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:20.876433    2484 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	W0921 22:13:20.876433    2484 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:20.876433    2484 retry.go:31] will retry after 1.802051686s: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:22.689145    2484 stop.go:39] StopHost: default-k8s-different-port-20220921221221-5916
	I0921 22:13:22.694246    2484 out.go:177] * Stopping node "default-k8s-different-port-20220921221221-5916"  ...
	I0921 22:13:22.712336    2484 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:22.921926    2484 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:22.921926    2484 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	W0921 22:13:22.921926    2484 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:22.921926    2484 retry.go:31] will retry after 3.426342621s: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:26.351932    2484 stop.go:39] StopHost: default-k8s-different-port-20220921221221-5916
	I0921 22:13:26.357934    2484 out.go:177] * Stopping node "default-k8s-different-port-20220921221221-5916"  ...
	I0921 22:13:26.373932    2484 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:26.589690    2484 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:26.589690    2484 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	W0921 22:13:26.589690    2484 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:26.589690    2484 retry.go:31] will retry after 6.650302303s: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:33.253640    2484 stop.go:39] StopHost: default-k8s-different-port-20220921221221-5916
	I0921 22:13:33.261632    2484 out.go:177] * Stopping node "default-k8s-different-port-20220921221221-5916"  ...
	I0921 22:13:33.303313    2484 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:33.510460    2484 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:33.510460    2484 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	W0921 22:13:33.510460    2484 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:33.513448    2484 out.go:177] 
	W0921 22:13:33.515459    2484 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-different-port-20220921221221-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-different-port-20220921221221-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:13:33.515459    2484 out.go:239] * 
	* 
	W0921 22:13:34.016467    2484 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_status_abcabdb3ea89e0e0cb5bb0e0976767ebe71062f4_70.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_status_abcabdb3ea89e0e0cb5bb0e0976767ebe71062f4_70.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:13:34.020466    2484 out.go:177] 

                                                
                                                
** /stderr **
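The stop fails with exit status 82 (GUEST_STOP_TIMEOUT) because the profile's container no longer exists: every `docker container inspect --format={{.State.Status}}` call above returns "No such container", and the state lookup is retried with growing delays (656ms up to 6.65s) before minikube gives up. A minimal sketch of that inspect-and-retry pattern, using plain os/exec against the Docker CLI rather than minikube's internal cli_runner (the container name is copied from the log; the backoff values here are illustrative, not minikube's):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerState mirrors the failing call above:
	// docker container inspect <name> --format={{.State.Status}}
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("inspect %q: %w: %s", name, err, strings.TrimSpace(string(out)))
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		name := "default-k8s-different-port-20220921221221-5916"
		delay := 500 * time.Millisecond
		for attempt := 1; attempt <= 6; attempt++ {
			state, err := containerState(name)
			if err == nil {
				fmt.Println("state:", state)
				return
			}
			// Grow the delay between attempts, roughly like the retry.go lines above.
			fmt.Printf("attempt %d failed, retrying after %v: %v\n", attempt, delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		fmt.Println("giving up: container never became inspectable")
	}

Against a missing container this loop exhausts its attempts, which is exactly the shape of the ~19 seconds of retries recorded above.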
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220921221221-5916 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221221-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220921221221-5916: exit status 1 (255.2066ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
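The post-mortem `docker inspect` confirms the same thing from the other side: stdout is the empty JSON array `[]` and stderr reports "No such object", so there are no container details to attach. A small sketch of telling a removed container apart from other inspect failures, assuming only the docker CLI on PATH (the switch on the stderr text is my assumption, not minikube or test-helper code):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		name := "default-k8s-different-port-20220921221221-5916"
		cmd := exec.Command("docker", "inspect", name)
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		err := cmd.Run()

		switch {
		case err == nil:
			fmt.Println("container exists; inspect JSON follows:")
			fmt.Println(stdout.String())
		case strings.Contains(stderr.String(), "No such object"):
			// The case in this log: stdout is just "[]" and stderr says the
			// object is gone, so there is nothing useful to dump.
			fmt.Println("container already removed; skipping post-mortem dump")
		default:
			fmt.Println("docker inspect failed for another reason:", err, strings.TrimSpace(stderr.String()))
		}
	}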
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916: exit status 7 (578.7455ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:13:34.875031    6340 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
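`minikube status --format={{.Host}}` then prints "Nonexistent" on stdout while exiting with status 7, which the helper treats as "may be ok" and uses to skip log retrieval. A sketch of that check, assuming a `minikube` binary on PATH rather than the test's `out/minikube-windows-amd64.exe`:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "default-k8s-different-port-20220921221221-5916"
		// Output() still returns whatever was written to stdout when the
		// command exits non-zero, which is exactly this case (exit status 7).
		out, err := exec.Command("minikube", "status",
			"--format", "{{.Host}}", "-p", profile).Output()
		host := strings.TrimSpace(string(out))
		if host == "Nonexistent" {
			fmt.Println("host is not running; skipping log retrieval")
			return
		}
		if err != nil {
			fmt.Println("status failed:", err)
			return
		}
		fmt.Println("host state:", host)
	}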
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220921221221-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Stop (20.19s)
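Net effect: most of the 20.19s spent in this test is the stop command retrying state lookups on a container that does not exist. One way a wrapper could short-circuit that, sketched here and not part of minikube itself, is to ask `docker ps -a` whether the container exists at all before attempting a power-off:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerExists lists all containers (running or stopped) whose name
	// matches the filter and reports whether the expected one is among them.
	func containerExists(name string) (bool, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+name, "--format", "{{.Names}}").Output()
		if err != nil {
			return false, err
		}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		name := "default-k8s-different-port-20220921221221-5916"
		ok, err := containerExists(name)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		if !ok {
			// The situation in this test: the container is already gone, so a
			// stop could be treated as a no-op instead of a GUEST_STOP_TIMEOUT.
			fmt.Println("container not found; nothing to stop")
			return
		}
		fmt.Println("container present; proceeding with stop")
	}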

                                                
                                    
TestNetworkPlugins/group/cilium/Start (49.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20220921220531-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20220921220531-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 60 (49.3255927s)

                                                
                                                
-- stdout --
	* [cilium-20220921220531-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node cilium-20220921220531-5916 in cluster cilium-20220921220531-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cilium-20220921220531-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
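The start gets as far as "Creating docker container" twice and never produces a node. The stderr that follows shows two host-side problems: the dedicated network cannot be created because 192.168.49.0/24 overlaps an existing bridge network, and `docker volume create` fails because /var/lib/docker/volumes inside the Docker Desktop VM is a read-only file system. A quick way to see the first conflict is to list the subnets already claimed by existing networks; a sketch using the same inspect template that appears in the log (docker CLI on PATH assumed):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// All network names first...
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			fmt.Println("docker network ls failed:", err)
			return
		}
		for _, name := range strings.Fields(string(out)) {
			// ...then each network's subnet(s) from its IPAM config, the same
			// template fragment minikube uses in the stderr below.
			subnet, err := exec.Command("docker", "network", "inspect", name,
				"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
			if err != nil {
				continue
			}
			fmt.Printf("%-28s %s\n", name, strings.TrimSpace(string(subnet)))
		}
	}

Any entry already holding 192.168.49.0/24 (here the stale br-a04d36bfb3cf bridge) explains the "networks have overlapping IPv4" error in the stderr.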
** stderr ** 
	I0921 22:13:20.126777    7444 out.go:296] Setting OutFile to fd 1648 ...
	I0921 22:13:20.186182    7444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:13:20.186182    7444 out.go:309] Setting ErrFile to fd 1988...
	I0921 22:13:20.186182    7444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:13:20.208293    7444 out.go:303] Setting JSON to false
	I0921 22:13:20.212172    7444 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4468,"bootTime":1663793932,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:13:20.212172    7444 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:13:20.217141    7444 out.go:177] * [cilium-20220921220531-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:13:20.221322    7444 notify.go:214] Checking for updates...
	I0921 22:13:20.223324    7444 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:13:20.226326    7444 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:13:20.228369    7444 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:13:20.231563    7444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:13:20.234974    7444 config.go:180] Loaded profile config "calico-20220921220531-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:13:20.234974    7444 config.go:180] Loaded profile config "default-k8s-different-port-20220921221221-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:13:20.235767    7444 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:13:20.236268    7444 config.go:180] Loaded profile config "newest-cni-20220921221222-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:13:20.236369    7444 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:13:20.531958    7444 docker.go:137] docker version: linux-20.10.17
	I0921 22:13:20.541630    7444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:13:21.079779    7444 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:89 SystemTime:2022-09-21 22:13:20.6945344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:13:21.083769    7444 out.go:177] * Using the docker driver based on user configuration
	I0921 22:13:21.087767    7444 start.go:284] selected driver: docker
	I0921 22:13:21.087767    7444 start.go:808] validating driver "docker" against <nil>
	I0921 22:13:21.087767    7444 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:13:21.152773    7444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:13:21.734949    7444 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:89 SystemTime:2022-09-21 22:13:21.3285756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:13:21.734949    7444 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:13:21.735933    7444 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:13:21.738937    7444 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:13:21.740955    7444 cni.go:95] Creating CNI manager for "cilium"
	I0921 22:13:21.740955    7444 start_flags.go:311] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0921 22:13:21.740955    7444 start_flags.go:316] config:
	{Name:cilium-20220921220531-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:cilium-20220921220531-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:13:21.743934    7444 out.go:177] * Starting control plane node cilium-20220921220531-5916 in cluster cilium-20220921220531-5916
	I0921 22:13:21.746933    7444 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:13:21.748964    7444 out.go:177] * Pulling base image ...
	I0921 22:13:21.751935    7444 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:13:21.751935    7444 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:13:21.751935    7444 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:13:21.751935    7444 cache.go:57] Caching tarball of preloaded images
	I0921 22:13:21.752935    7444 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:13:21.752935    7444 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:13:21.752935    7444 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-20220921220531-5916\config.json ...
	I0921 22:13:21.752935    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-20220921220531-5916\config.json: {Name:mk7a99b654d7bbf19ae6c8b1b60b550bad8d78c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:13:21.954808    7444 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:13:21.954808    7444 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:13:21.954808    7444 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:13:21.954808    7444 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:13:21.954808    7444 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:13:21.954808    7444 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:13:21.954808    7444 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:13:21.954808    7444 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:13:21.954808    7444 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:13:24.325853    7444 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:13:24.325853    7444 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:13:24.325927    7444 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:13:24.326312    7444 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:13:24.568093    7444 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    (progress line redrawn several times)
	I0921 22:13:26.493743    7444 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:13:26.493743    7444 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:13:26.493743    7444 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:13:26.493743    7444 start.go:364] acquiring machines lock for cilium-20220921220531-5916: {Name:mk47f0d266dd6bd46935b6546f672f77684f4148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:13:26.493743    7444 start.go:368] acquired machines lock for "cilium-20220921220531-5916" in 0s
	I0921 22:13:26.493743    7444 start.go:93] Provisioning new machine with config: &{Name:cilium-20220921220531-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:cilium-20220921220531-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:13:26.494699    7444 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:13:26.497755    7444 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:13:26.498738    7444 start.go:159] libmachine.API.Create for "cilium-20220921220531-5916" (driver="docker")
	I0921 22:13:26.498738    7444 client.go:168] LocalClient.Create starting
	I0921 22:13:26.498738    7444 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:13:26.498738    7444 main.go:134] libmachine: Decoding PEM data...
	I0921 22:13:26.499706    7444 main.go:134] libmachine: Parsing certificate...
	I0921 22:13:26.499706    7444 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:13:26.499706    7444 main.go:134] libmachine: Decoding PEM data...
	I0921 22:13:26.499706    7444 main.go:134] libmachine: Parsing certificate...
	I0921 22:13:26.509740    7444 cli_runner.go:164] Run: docker network inspect cilium-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:13:26.728350    7444 cli_runner.go:211] docker network inspect cilium-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:13:26.735419    7444 network_create.go:272] running [docker network inspect cilium-20220921220531-5916] to gather additional debugging logs...
	I0921 22:13:26.735419    7444 cli_runner.go:164] Run: docker network inspect cilium-20220921220531-5916
	W0921 22:13:26.932656    7444 cli_runner.go:211] docker network inspect cilium-20220921220531-5916 returned with exit code 1
	I0921 22:13:26.932656    7444 network_create.go:275] error running [docker network inspect cilium-20220921220531-5916]: docker network inspect cilium-20220921220531-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220921220531-5916
	I0921 22:13:26.932656    7444 network_create.go:277] output of [docker network inspect cilium-20220921220531-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220921220531-5916
	
	** /stderr **
	I0921 22:13:26.938654    7444 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:13:27.170796    7444 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006b8b40] misses:0}
	I0921 22:13:27.170796    7444 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:13:27.170796    7444 network_create.go:115] attempt to create docker network cilium-20220921220531-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:13:27.179447    7444 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220921220531-5916 cilium-20220921220531-5916
	W0921 22:13:27.368505    7444 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220921220531-5916 cilium-20220921220531-5916 returned with exit code 1
	E0921 22:13:27.371476    7444 network_create.go:104] error while trying to create docker network cilium-20220921220531-5916 192.168.49.0/24: create docker network cilium-20220921220531-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220921220531-5916 cilium-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b094e127febe7846ea57c7dad72676508984b4a1466860444d1bc36cbab1e47f (br-b094e127febe): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:13:27.371476    7444 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220921220531-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220921220531-5916 cilium-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b094e127febe7846ea57c7dad72676508984b4a1466860444d1bc36cbab1e47f (br-b094e127febe): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220921220531-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220921220531-5916 cilium-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b094e127febe7846ea57c7dad72676508984b4a1466860444d1bc36cbab1e47f (br-b094e127febe): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:13:27.385476    7444 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:13:27.591313    7444 cli_runner.go:164] Run: docker volume create cilium-20220921220531-5916 --label name.minikube.sigs.k8s.io=cilium-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:13:27.777783    7444 cli_runner.go:211] docker volume create cilium-20220921220531-5916 --label name.minikube.sigs.k8s.io=cilium-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:13:27.777783    7444 client.go:171] LocalClient.Create took 1.2790356s
	I0921 22:13:29.799528    7444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:13:29.805654    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:13:30.034874    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:13:30.035108    7444 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:30.323302    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:13:30.532193    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:13:30.532775    7444 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:31.086710    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:13:31.297092    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	W0921 22:13:31.297092    7444 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	
	W0921 22:13:31.297092    7444 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:31.307461    7444 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:13:31.314108    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:13:31.500530    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:13:31.500530    7444 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:31.742667    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:13:31.937650    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:13:31.937650    7444 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:32.303462    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:13:32.501555    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:13:32.501555    7444 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:33.185726    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:13:33.380092    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	W0921 22:13:33.380092    7444 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	
	W0921 22:13:33.380092    7444 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:33.380092    7444 start.go:128] duration metric: createHost completed in 6.8853386s
	I0921 22:13:33.380092    7444 start.go:83] releasing machines lock for "cilium-20220921220531-5916", held for 6.8862942s
	W0921 22:13:33.380092    7444 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for cilium-20220921220531-5916 container: docker volume create cilium-20220921220531-5916 --label name.minikube.sigs.k8s.io=cilium-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/cilium-20220921220531-5916': mkdir /var/lib/docker/volumes/cilium-20220921220531-5916: read-only file system
	I0921 22:13:33.399090    7444 cli_runner.go:164] Run: docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:33.590459    7444 cli_runner.go:211] docker container inspect cilium-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:33.590459    7444 delete.go:82] Unable to get host status for cilium-20220921220531-5916, assuming it has already been deleted: state: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	W0921 22:13:33.590459    7444 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cilium-20220921220531-5916 container: docker volume create cilium-20220921220531-5916 --label name.minikube.sigs.k8s.io=cilium-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/cilium-20220921220531-5916': mkdir /var/lib/docker/volumes/cilium-20220921220531-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for cilium-20220921220531-5916 container: docker volume create cilium-20220921220531-5916 --label name.minikube.sigs.k8s.io=cilium-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/cilium-20220921220531-5916': mkdir /var/lib/docker/volumes/cilium-20220921220531-5916: read-only file system
	
	I0921 22:13:33.590459    7444 start.go:617] Will try again in 5 seconds ...
	I0921 22:13:38.602679    7444 start.go:364] acquiring machines lock for cilium-20220921220531-5916: {Name:mk47f0d266dd6bd46935b6546f672f77684f4148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:13:38.602679    7444 start.go:368] acquired machines lock for "cilium-20220921220531-5916" in 0s
	I0921 22:13:38.603208    7444 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:13:38.603208    7444 fix.go:55] fixHost starting: 
	I0921 22:13:38.618579    7444 cli_runner.go:164] Run: docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:38.808257    7444 cli_runner.go:211] docker container inspect cilium-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:38.808257    7444 fix.go:103] recreateIfNeeded on cilium-20220921220531-5916: state= err=unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:38.808257    7444 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:13:38.812254    7444 out.go:177] * docker "cilium-20220921220531-5916" container is missing, will recreate.
	I0921 22:13:38.814253    7444 delete.go:124] DEMOLISHING cilium-20220921220531-5916 ...
	I0921 22:13:38.827258    7444 cli_runner.go:164] Run: docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:39.030450    7444 cli_runner.go:211] docker container inspect cilium-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:39.030450    7444 stop.go:75] unable to get state: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:39.030450    7444 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:39.045441    7444 cli_runner.go:164] Run: docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:39.249814    7444 cli_runner.go:211] docker container inspect cilium-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:39.249814    7444 delete.go:82] Unable to get host status for cilium-20220921220531-5916, assuming it has already been deleted: state: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:39.256814    7444 cli_runner.go:164] Run: docker container inspect -f {{.Id}} cilium-20220921220531-5916
	W0921 22:13:39.454295    7444 cli_runner.go:211] docker container inspect -f {{.Id}} cilium-20220921220531-5916 returned with exit code 1
	I0921 22:13:39.454295    7444 kic.go:356] could not find the container cilium-20220921220531-5916 to remove it. will try anyways
	I0921 22:13:39.465222    7444 cli_runner.go:164] Run: docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:39.673773    7444 cli_runner.go:211] docker container inspect cilium-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:39.673773    7444 oci.go:84] error getting container status, will try to delete anyways: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:39.681327    7444 cli_runner.go:164] Run: docker exec --privileged -t cilium-20220921220531-5916 /bin/bash -c "sudo init 0"
	W0921 22:13:39.879996    7444 cli_runner.go:211] docker exec --privileged -t cilium-20220921220531-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:13:39.879996    7444 oci.go:646] error shutdown cilium-20220921220531-5916: docker exec --privileged -t cilium-20220921220531-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:40.902843    7444 cli_runner.go:164] Run: docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:41.096644    7444 cli_runner.go:211] docker container inspect cilium-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:41.096844    7444 oci.go:658] temporary error verifying shutdown: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:41.096944    7444 oci.go:660] temporary error: container cilium-20220921220531-5916 status is  but expect it to be exited
	I0921 22:13:41.097013    7444 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:41.452908    7444 cli_runner.go:164] Run: docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:41.641592    7444 cli_runner.go:211] docker container inspect cilium-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:41.641652    7444 oci.go:658] temporary error verifying shutdown: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:41.641652    7444 oci.go:660] temporary error: container cilium-20220921220531-5916 status is  but expect it to be exited
	I0921 22:13:41.641652    7444 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:42.101241    7444 cli_runner.go:164] Run: docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:42.285329    7444 cli_runner.go:211] docker container inspect cilium-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:42.285329    7444 oci.go:658] temporary error verifying shutdown: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:42.285329    7444 oci.go:660] temporary error: container cilium-20220921220531-5916 status is  but expect it to be exited
	I0921 22:13:42.285329    7444 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:43.200670    7444 cli_runner.go:164] Run: docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:43.383863    7444 cli_runner.go:211] docker container inspect cilium-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:43.383863    7444 oci.go:658] temporary error verifying shutdown: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:43.383863    7444 oci.go:660] temporary error: container cilium-20220921220531-5916 status is  but expect it to be exited
	I0921 22:13:43.383863    7444 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:45.120637    7444 cli_runner.go:164] Run: docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:45.327165    7444 cli_runner.go:211] docker container inspect cilium-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:45.327165    7444 oci.go:658] temporary error verifying shutdown: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:45.327165    7444 oci.go:660] temporary error: container cilium-20220921220531-5916 status is  but expect it to be exited
	I0921 22:13:45.327165    7444 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:48.668074    7444 cli_runner.go:164] Run: docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:48.878373    7444 cli_runner.go:211] docker container inspect cilium-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:48.878462    7444 oci.go:658] temporary error verifying shutdown: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:48.878462    7444 oci.go:660] temporary error: container cilium-20220921220531-5916 status is  but expect it to be exited
	I0921 22:13:48.878537    7444 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:51.612851    7444 cli_runner.go:164] Run: docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:51.821339    7444 cli_runner.go:211] docker container inspect cilium-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:51.821339    7444 oci.go:658] temporary error verifying shutdown: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:51.821339    7444 oci.go:660] temporary error: container cilium-20220921220531-5916 status is  but expect it to be exited
	I0921 22:13:51.821339    7444 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:56.855683    7444 cli_runner.go:164] Run: docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}
	W0921 22:13:57.050212    7444 cli_runner.go:211] docker container inspect cilium-20220921220531-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:57.050212    7444 oci.go:658] temporary error verifying shutdown: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:13:57.050212    7444 oci.go:660] temporary error: container cilium-20220921220531-5916 status is  but expect it to be exited
	I0921 22:13:57.050212    7444 oci.go:88] couldn't shut down cilium-20220921220531-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "cilium-20220921220531-5916": docker container inspect cilium-20220921220531-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	 
	I0921 22:13:57.059354    7444 cli_runner.go:164] Run: docker rm -f -v cilium-20220921220531-5916
	I0921 22:13:57.275757    7444 cli_runner.go:164] Run: docker container inspect -f {{.Id}} cilium-20220921220531-5916
	W0921 22:13:57.470910    7444 cli_runner.go:211] docker container inspect -f {{.Id}} cilium-20220921220531-5916 returned with exit code 1
	I0921 22:13:57.478285    7444 cli_runner.go:164] Run: docker network inspect cilium-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:13:57.657253    7444 cli_runner.go:211] docker network inspect cilium-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:13:57.665240    7444 network_create.go:272] running [docker network inspect cilium-20220921220531-5916] to gather additional debugging logs...
	I0921 22:13:57.665240    7444 cli_runner.go:164] Run: docker network inspect cilium-20220921220531-5916
	W0921 22:13:57.860671    7444 cli_runner.go:211] docker network inspect cilium-20220921220531-5916 returned with exit code 1
	I0921 22:13:57.860846    7444 network_create.go:275] error running [docker network inspect cilium-20220921220531-5916]: docker network inspect cilium-20220921220531-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220921220531-5916
	I0921 22:13:57.860926    7444 network_create.go:277] output of [docker network inspect cilium-20220921220531-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220921220531-5916
	
	** /stderr **
	W0921 22:13:57.862079    7444 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:13:57.862079    7444 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:13:58.865056    7444 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:13:58.868542    7444 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:13:58.868823    7444 start.go:159] libmachine.API.Create for "cilium-20220921220531-5916" (driver="docker")
	I0921 22:13:58.868953    7444 client.go:168] LocalClient.Create starting
	I0921 22:13:58.869474    7444 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:13:58.869474    7444 main.go:134] libmachine: Decoding PEM data...
	I0921 22:13:58.869474    7444 main.go:134] libmachine: Parsing certificate...
	I0921 22:13:58.870100    7444 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:13:58.870312    7444 main.go:134] libmachine: Decoding PEM data...
	I0921 22:13:58.870312    7444 main.go:134] libmachine: Parsing certificate...
	I0921 22:13:58.879529    7444 cli_runner.go:164] Run: docker network inspect cilium-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:13:59.081395    7444 cli_runner.go:211] docker network inspect cilium-20220921220531-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:13:59.088440    7444 network_create.go:272] running [docker network inspect cilium-20220921220531-5916] to gather additional debugging logs...
	I0921 22:13:59.088440    7444 cli_runner.go:164] Run: docker network inspect cilium-20220921220531-5916
	W0921 22:13:59.298737    7444 cli_runner.go:211] docker network inspect cilium-20220921220531-5916 returned with exit code 1
	I0921 22:13:59.298737    7444 network_create.go:275] error running [docker network inspect cilium-20220921220531-5916]: docker network inspect cilium-20220921220531-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220921220531-5916
	I0921 22:13:59.298737    7444 network_create.go:277] output of [docker network inspect cilium-20220921220531-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220921220531-5916
	
	** /stderr **
	I0921 22:13:59.307544    7444 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:13:59.537250    7444 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006b8b40] amended:false}} dirty:map[] misses:0}
	I0921 22:13:59.537250    7444 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:13:59.553088    7444 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006b8b40] amended:true}} dirty:map[192.168.49.0:0xc0006b8b40 192.168.58.0:0xc000a9a648] misses:0}
	I0921 22:13:59.553294    7444 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:13:59.553380    7444 network_create.go:115] attempt to create docker network cilium-20220921220531-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:13:59.560562    7444 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220921220531-5916 cilium-20220921220531-5916
	W0921 22:13:59.735519    7444 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220921220531-5916 cilium-20220921220531-5916 returned with exit code 1
	E0921 22:13:59.735519    7444 network_create.go:104] error while trying to create docker network cilium-20220921220531-5916 192.168.58.0/24: create docker network cilium-20220921220531-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220921220531-5916 cilium-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 752d7c2e45effcf7f161ecb0a0f3da228cbc5e9b49d2743b970c8c3ac96f38da (br-752d7c2e45ef): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:13:59.735519    7444 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220921220531-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220921220531-5916 cilium-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 752d7c2e45effcf7f161ecb0a0f3da228cbc5e9b49d2743b970c8c3ac96f38da (br-752d7c2e45ef): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network cilium-20220921220531-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220921220531-5916 cilium-20220921220531-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 752d7c2e45effcf7f161ecb0a0f3da228cbc5e9b49d2743b970c8c3ac96f38da (br-752d7c2e45ef): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:13:59.749415    7444 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:13:59.960966    7444 cli_runner.go:164] Run: docker volume create cilium-20220921220531-5916 --label name.minikube.sigs.k8s.io=cilium-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:14:00.185276    7444 cli_runner.go:211] docker volume create cilium-20220921220531-5916 --label name.minikube.sigs.k8s.io=cilium-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:14:00.185366    7444 client.go:171] LocalClient.Create took 1.316403s
	I0921 22:14:02.204998    7444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:02.210995    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:02.409590    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:14:02.409590    7444 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:02.671645    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:02.865454    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:14:02.865454    7444 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:03.175363    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:03.358371    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:14:03.358371    7444 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:03.820556    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:04.012751    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	W0921 22:14:04.012751    7444 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	
	W0921 22:14:04.012751    7444 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:04.024983    7444 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:04.032820    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:04.213602    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:14:04.213935    7444 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:04.401674    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:04.608344    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:14:04.608344    7444 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:04.880391    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:05.078104    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:14:05.078104    7444 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:05.583216    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:05.804907    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	W0921 22:14:05.804907    7444 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	
	W0921 22:14:05.804907    7444 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:05.804907    7444 start.go:128] duration metric: createHost completed in 6.9397957s
	I0921 22:14:05.815899    7444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:05.822900    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:06.012160    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:14:06.012405    7444 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:06.367645    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:06.559826    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:14:06.559826    7444 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:06.879916    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:07.090999    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:14:07.090999    7444 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:07.565409    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:07.755276    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	W0921 22:14:07.755476    7444 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	
	W0921 22:14:07.755651    7444 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:07.767047    7444 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:07.772693    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:07.995177    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:14:07.995177    7444 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:08.183006    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:08.423223    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	I0921 22:14:08.423223    7444 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:08.945219    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916
	W0921 22:14:09.157257    7444 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916 returned with exit code 1
	W0921 22:14:09.157257    7444 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	
	W0921 22:14:09.157257    7444 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20220921220531-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220921220531-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20220921220531-5916
	I0921 22:14:09.157257    7444 fix.go:57] fixHost completed within 30.5538064s
	I0921 22:14:09.157257    7444 start.go:83] releasing machines lock for "cilium-20220921220531-5916", held for 30.5543349s
	W0921 22:14:09.157257    7444 out.go:239] * Failed to start docker container. Running "minikube delete -p cilium-20220921220531-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220921220531-5916 container: docker volume create cilium-20220921220531-5916 --label name.minikube.sigs.k8s.io=cilium-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/cilium-20220921220531-5916': mkdir /var/lib/docker/volumes/cilium-20220921220531-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p cilium-20220921220531-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220921220531-5916 container: docker volume create cilium-20220921220531-5916 --label name.minikube.sigs.k8s.io=cilium-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/cilium-20220921220531-5916': mkdir /var/lib/docker/volumes/cilium-20220921220531-5916: read-only file system
	
	I0921 22:14:09.162287    7444 out.go:177] 
	W0921 22:14:09.164262    7444 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220921220531-5916 container: docker volume create cilium-20220921220531-5916 --label name.minikube.sigs.k8s.io=cilium-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/cilium-20220921220531-5916': mkdir /var/lib/docker/volumes/cilium-20220921220531-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for cilium-20220921220531-5916 container: docker volume create cilium-20220921220531-5916 --label name.minikube.sigs.k8s.io=cilium-20220921220531-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create cilium-20220921220531-5916: error while creating volume root path '/var/lib/docker/volumes/cilium-20220921220531-5916': mkdir /var/lib/docker/volumes/cilium-20220921220531-5916: read-only file system
	
	W0921 22:14:09.164262    7444 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:14:09.164262    7444 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:14:09.168281    7444 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/cilium/Start (49.42s)
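The stderr above shows two independent Docker-daemon problems behind this failure: the volume create call was rejected because /var/lib/docker inside the Docker Desktop VM was mounted read-only (reported as PR_DOCKER_READONLY_VOL), and the fallback attempt to create a dedicated 192.168.58.0/24 bridge network conflicted with an existing bridge that already covers an overlapping subnet. The commands below are not part of the test output; they are a hypothetical manual check, limited to standard Docker CLI calls, that could confirm both conditions on the affected host before trying the suggested "Restart Docker" remediation:

	# Probe the volume store: this fails with "read-only file system" while the
	# daemon's /var/lib/docker is read-only, and succeeds after a Docker restart.
	docker volume create readonly-probe
	docker volume rm readonly-probe        # clean up only if the create succeeded
	# List bridge networks, then inspect each one to see which subnet overlaps
	# with the 192.168.58.0/24 range minikube tried to reserve above.
	docker network ls --filter driver=bridge --format '{{.Name}}'
	docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' <network-name>

If the volume probe succeeds and no listed subnet overlaps 192.168.58.0/24, a rerun of this test would be expected to get past the createHost step; otherwise restarting Docker Desktop (per the related issue linked above) is the documented next step.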

TestNetworkPlugins/group/false/Start (49.29s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20220921220530-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p false-20220921220530-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: exit status 60 (49.1948881s)

-- stdout --
	* [false-20220921220530-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node false-20220921220530-5916 in cluster false-20220921220530-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "false-20220921220530-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 22:13:23.758763    6256 out.go:296] Setting OutFile to fd 2008 ...
	I0921 22:13:23.825984    6256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:13:23.825984    6256 out.go:309] Setting ErrFile to fd 1676...
	I0921 22:13:23.826031    6256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:13:23.851973    6256 out.go:303] Setting JSON to false
	I0921 22:13:23.854322    6256 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4472,"bootTime":1663793931,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:13:23.854322    6256 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:13:23.858282    6256 out.go:177] * [false-20220921220530-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:13:23.862744    6256 notify.go:214] Checking for updates...
	I0921 22:13:23.864049    6256 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:13:23.867230    6256 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:13:23.869703    6256 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:13:23.872066    6256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:13:23.876047    6256 config.go:180] Loaded profile config "cilium-20220921220531-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:13:23.876436    6256 config.go:180] Loaded profile config "default-k8s-different-port-20220921221221-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:13:23.876829    6256 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:13:23.877380    6256 config.go:180] Loaded profile config "newest-cni-20220921221222-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:13:23.877380    6256 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:13:24.152137    6256 docker.go:137] docker version: linux-20.10.17
	I0921 22:13:24.160459    6256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:13:24.677852    6256 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:90 SystemTime:2022-09-21 22:13:24.3118572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:13:24.683670    6256 out.go:177] * Using the docker driver based on user configuration
	I0921 22:13:24.686106    6256 start.go:284] selected driver: docker
	I0921 22:13:24.686106    6256 start.go:808] validating driver "docker" against <nil>
	I0921 22:13:24.686106    6256 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:13:24.745107    6256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:13:25.269854    6256 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:89 SystemTime:2022-09-21 22:13:24.897391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-p
lugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:13:25.269854    6256 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:13:25.269854    6256 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:13:25.273817    6256 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:13:25.276081    6256 cni.go:95] Creating CNI manager for "false"
	I0921 22:13:25.276081    6256 start_flags.go:316] config:
	{Name:false-20220921220530-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:false-20220921220530-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:13:25.279889    6256 out.go:177] * Starting control plane node false-20220921220530-5916 in cluster false-20220921220530-5916
	I0921 22:13:25.281799    6256 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:13:25.284063    6256 out.go:177] * Pulling base image ...
	I0921 22:13:25.287063    6256 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:13:25.288078    6256 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:13:25.288117    6256 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:13:25.288117    6256 cache.go:57] Caching tarball of preloaded images
	I0921 22:13:25.288117    6256 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:13:25.288810    6256 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:13:25.288964    6256 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-20220921220530-5916\config.json ...
	I0921 22:13:25.288964    6256 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-20220921220530-5916\config.json: {Name:mk4b586ed1e4b23e6ac376dde497a4fa2edc72e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:13:25.531751    6256 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:13:25.531873    6256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:13:25.532115    6256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
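A note on the two "windows sanitize" lines above: the visible transformation is that ':' characters in the cached tarball's file name are replaced with '_' so the name is valid on Windows, while the directory part (including the drive letter) is left alone. A toy Go sketch of that idea (not minikube's localpath implementation; the shortened example path is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // sanitizeBaseName replaces ':' in the final path component with '_',
    // leaving the directory portion (including the drive letter) untouched.
    func sanitizeBaseName(p string) string {
        i := strings.LastIndexByte(p, '\\')
        dir, base := p[:i+1], p[i+1:]
        return dir + strings.ReplaceAll(base, ":", "_")
    }

    func main() {
        in := `C:\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e5.tar` // shortened example path
        fmt.Println(sanitizeBaseName(in))
        // prints: C:\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e5.tar
    }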
	I0921 22:13:25.532227    6256 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:13:25.532330    6256 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:13:25.532330    6256 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:13:25.532488    6256 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:13:25.532488    6256 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:13:25.532562    6256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:13:27.914455    6256 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:13:27.914455    6256 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:13:27.914558    6256 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:13:27.914956    6256 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:13:28.123567    6256 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [_______________________] ?% ? p/s 1.1s
	I0921 22:13:30.035108    6256 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:13:30.035213    6256 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:13:30.035276    6256 cache.go:208] Successfully downloaded all kic artifacts
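The lines above first check whether the kicbase image is already present in the local Docker daemon and in the on-disk cache before pulling. A minimal sketch of the daemon-side check, shelling out to `docker image inspect` (illustration only; the helper name is made up and a local Docker CLI is assumed):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imageInDaemon reports whether the given reference is already present in
    // the local Docker daemon; `docker image inspect` exits non-zero otherwise.
    func imageInDaemon(ref string) bool {
        out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
        return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase:v0.0.34"
        fmt.Printf("%s in local daemon: %v\n", ref, imageInDaemon(ref))
    }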
	I0921 22:13:30.035392    6256 start.go:364] acquiring machines lock for false-20220921220530-5916: {Name:mk4cfc5911e0a58ba2fed28e19ae4948269c56dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:13:30.035392    6256 start.go:368] acquired machines lock for "false-20220921220530-5916" in 0s
	I0921 22:13:30.035392    6256 start.go:93] Provisioning new machine with config: &{Name:false-20220921220530-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:false-20220921220530-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:13:30.035936    6256 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:13:30.052266    6256 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:13:30.052266    6256 start.go:159] libmachine.API.Create for "false-20220921220530-5916" (driver="docker")
	I0921 22:13:30.052266    6256 client.go:168] LocalClient.Create starting
	I0921 22:13:30.053427    6256 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:13:30.053636    6256 main.go:134] libmachine: Decoding PEM data...
	I0921 22:13:30.053714    6256 main.go:134] libmachine: Parsing certificate...
	I0921 22:13:30.053909    6256 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:13:30.054222    6256 main.go:134] libmachine: Decoding PEM data...
	I0921 22:13:30.054222    6256 main.go:134] libmachine: Parsing certificate...
	I0921 22:13:30.063063    6256 cli_runner.go:164] Run: docker network inspect false-20220921220530-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:13:30.253243    6256 cli_runner.go:211] docker network inspect false-20220921220530-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:13:30.260206    6256 network_create.go:272] running [docker network inspect false-20220921220530-5916] to gather additional debugging logs...
	I0921 22:13:30.260206    6256 cli_runner.go:164] Run: docker network inspect false-20220921220530-5916
	W0921 22:13:30.455183    6256 cli_runner.go:211] docker network inspect false-20220921220530-5916 returned with exit code 1
	I0921 22:13:30.455183    6256 network_create.go:275] error running [docker network inspect false-20220921220530-5916]: docker network inspect false-20220921220530-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220921220530-5916
	I0921 22:13:30.455183    6256 network_create.go:277] output of [docker network inspect false-20220921220530-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220921220530-5916
	
	** /stderr **
	I0921 22:13:30.463366    6256 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:13:30.677830    6256 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000ae38] misses:0}
	I0921 22:13:30.678078    6256 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:13:30.678129    6256 network_create.go:115] attempt to create docker network false-20220921220530-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:13:30.685725    6256 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220921220530-5916 false-20220921220530-5916
	W0921 22:13:30.877152    6256 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220921220530-5916 false-20220921220530-5916 returned with exit code 1
	E0921 22:13:30.877390    6256 network_create.go:104] error while trying to create docker network false-20220921220530-5916 192.168.49.0/24: create docker network false-20220921220530-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220921220530-5916 false-20220921220530-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9efe07e13e938df8e0914ddef4737699b9c1e3134bfab3d8c26ad8fe086bbf89 (br-9efe07e13e93): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:13:30.877390    6256 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220921220530-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220921220530-5916 false-20220921220530-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9efe07e13e938df8e0914ddef4737699b9c1e3134bfab3d8c26ad8fe086bbf89 (br-9efe07e13e93): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220921220530-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220921220530-5916 false-20220921220530-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9efe07e13e938df8e0914ddef4737699b9c1e3134bfab3d8c26ad8fe086bbf89 (br-9efe07e13e93): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:13:30.887949    6256 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
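The `docker network create` above is rejected with "networks have overlapping IPv4": the requested 192.168.49.0/24 collides with a range an existing bridge already owns. For reference, overlap between two CIDR blocks can be checked with the Go standard library alone; a minimal sketch (not minikube's network code; the helper name and the second subnet are example values):

    package main

    import (
        "fmt"
        "net"
    )

    // cidrsOverlap reports whether two IPv4 CIDR blocks share any addresses:
    // for CIDRs this is true exactly when one block contains the other's base IP.
    func cidrsOverlap(a, b string) (bool, error) {
        ipA, netA, err := net.ParseCIDR(a)
        if err != nil {
            return false, err
        }
        ipB, netB, err := net.ParseCIDR(b)
        if err != nil {
            return false, err
        }
        return netA.Contains(ipB) || netB.Contains(ipA), nil
    }

    func main() {
        // 192.168.48.0/23 covers 192.168.48.0-192.168.49.255, so it overlaps 192.168.49.0/24.
        overlap, err := cidrsOverlap("192.168.49.0/24", "192.168.48.0/23")
        fmt.Println(overlap, err) // true <nil>
    }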
	I0921 22:13:31.100655    6256 cli_runner.go:164] Run: docker volume create false-20220921220530-5916 --label name.minikube.sigs.k8s.io=false-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:13:31.280968    6256 cli_runner.go:211] docker volume create false-20220921220530-5916 --label name.minikube.sigs.k8s.io=false-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:13:31.280968    6256 client.go:171] LocalClient.Create took 1.2286921s
	I0921 22:13:33.298679    6256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:13:33.305365    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:13:33.494459    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:13:33.494459    6256 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:33.786917    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:13:33.980339    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:13:33.980339    6256 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:34.539409    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:13:34.730976    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	W0921 22:13:34.730976    6256 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	
	W0921 22:13:34.730976    6256 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:34.741962    6256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:13:34.749013    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:13:34.939054    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:13:34.939054    6256 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:35.181253    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:13:35.380668    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:13:35.380668    6256 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:35.751999    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:13:35.966860    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:13:35.966860    6256 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:36.647442    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:13:36.838836    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	W0921 22:13:36.838836    6256 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	
	W0921 22:13:36.838836    6256 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
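The repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls above are how the SSH host port is looked up; each attempt fails with "No such container" because the node container was never created. A minimal sketch of the same lookup from Go (illustration only; assumes a local Docker CLI and uses the template shown in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort returns the host port mapped to the container's 22/tcp,
    // using the same Go template the log shows being passed to docker inspect.
    func sshHostPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).CombinedOutput()
        if err != nil {
            // e.g. "Error: No such container: ..." when the container was never created
            return "", fmt.Errorf("inspect %s: %v: %s", container, err, out)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("false-20220921220530-5916")
        fmt.Println(port, err)
    }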
	I0921 22:13:36.838836    6256 start.go:128] duration metric: createHost completed in 6.8028467s
	I0921 22:13:36.838836    6256 start.go:83] releasing machines lock for "false-20220921220530-5916", held for 6.8033902s
	W0921 22:13:36.838836    6256 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for false-20220921220530-5916 container: docker volume create false-20220921220530-5916 --label name.minikube.sigs.k8s.io=false-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220921220530-5916: error while creating volume root path '/var/lib/docker/volumes/false-20220921220530-5916': mkdir /var/lib/docker/volumes/false-20220921220530-5916: read-only file system
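The volume create fails because the daemon cannot write under its data root (`/var/lib/docker` per the Docker info dump earlier), which is mounted read-only in this environment. The data root in use can be confirmed directly with `docker info`; a small sketch (illustration only):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Ask the daemon where it stores volumes, images, and container layers.
        out, err := exec.Command("docker", "info", "--format", "{{.DockerRootDir}}").Output()
        if err != nil {
            fmt.Println("docker info:", err)
            return
        }
        fmt.Println("data root:", strings.TrimSpace(string(out)))
    }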
	I0921 22:13:36.856801    6256 cli_runner.go:164] Run: docker container inspect false-20220921220530-5916 --format={{.State.Status}}
	W0921 22:13:37.029448    6256 cli_runner.go:211] docker container inspect false-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:37.029448    6256 delete.go:82] Unable to get host status for false-20220921220530-5916, assuming it has already been deleted: state: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	W0921 22:13:37.029448    6256 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for false-20220921220530-5916 container: docker volume create false-20220921220530-5916 --label name.minikube.sigs.k8s.io=false-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220921220530-5916: error while creating volume root path '/var/lib/docker/volumes/false-20220921220530-5916': mkdir /var/lib/docker/volumes/false-20220921220530-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for false-20220921220530-5916 container: docker volume create false-20220921220530-5916 --label name.minikube.sigs.k8s.io=false-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220921220530-5916: error while creating volume root path '/var/lib/docker/volumes/false-20220921220530-5916': mkdir /var/lib/docker/volumes/false-20220921220530-5916: read-only file system
	
	I0921 22:13:37.029448    6256 start.go:617] Will try again in 5 seconds ...
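Two retry patterns are visible in this run: per-operation retries with growing delays (the `retry.go:31` "will retry after ..." lines) and the fixed 5-second pause above before host creation is attempted again from scratch. A minimal sketch of the first pattern (illustration only, not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retry runs fn up to attempts times, roughly doubling the wait between tries.
    func retry(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, err, delay)
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(4, 300*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("container not ready yet")
            }
            return nil
        })
        fmt.Println("result:", err)
    }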
	I0921 22:13:42.032581    6256 start.go:364] acquiring machines lock for false-20220921220530-5916: {Name:mk4cfc5911e0a58ba2fed28e19ae4948269c56dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:13:42.032673    6256 start.go:368] acquired machines lock for "false-20220921220530-5916" in 0s
	I0921 22:13:42.032673    6256 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:13:42.032673    6256 fix.go:55] fixHost starting: 
	I0921 22:13:42.061594    6256 cli_runner.go:164] Run: docker container inspect false-20220921220530-5916 --format={{.State.Status}}
	W0921 22:13:42.237251    6256 cli_runner.go:211] docker container inspect false-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:42.237251    6256 fix.go:103] recreateIfNeeded on false-20220921220530-5916: state= err=unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:42.237251    6256 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:13:42.240240    6256 out.go:177] * docker "false-20220921220530-5916" container is missing, will recreate.
	I0921 22:13:42.243278    6256 delete.go:124] DEMOLISHING false-20220921220530-5916 ...
	I0921 22:13:42.259247    6256 cli_runner.go:164] Run: docker container inspect false-20220921220530-5916 --format={{.State.Status}}
	W0921 22:13:42.485887    6256 cli_runner.go:211] docker container inspect false-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:42.485887    6256 stop.go:75] unable to get state: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:42.485887    6256 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:42.499915    6256 cli_runner.go:164] Run: docker container inspect false-20220921220530-5916 --format={{.State.Status}}
	W0921 22:13:42.690869    6256 cli_runner.go:211] docker container inspect false-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:42.690959    6256 delete.go:82] Unable to get host status for false-20220921220530-5916, assuming it has already been deleted: state: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:42.700562    6256 cli_runner.go:164] Run: docker container inspect -f {{.Id}} false-20220921220530-5916
	W0921 22:13:42.891712    6256 cli_runner.go:211] docker container inspect -f {{.Id}} false-20220921220530-5916 returned with exit code 1
	I0921 22:13:42.891835    6256 kic.go:356] could not find the container false-20220921220530-5916 to remove it. will try anyways
	I0921 22:13:42.899784    6256 cli_runner.go:164] Run: docker container inspect false-20220921220530-5916 --format={{.State.Status}}
	W0921 22:13:43.082588    6256 cli_runner.go:211] docker container inspect false-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:43.082588    6256 oci.go:84] error getting container status, will try to delete anyways: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:43.090572    6256 cli_runner.go:164] Run: docker exec --privileged -t false-20220921220530-5916 /bin/bash -c "sudo init 0"
	W0921 22:13:43.273672    6256 cli_runner.go:211] docker exec --privileged -t false-20220921220530-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:13:43.273672    6256 oci.go:646] error shutdown false-20220921220530-5916: docker exec --privileged -t false-20220921220530-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:44.297863    6256 cli_runner.go:164] Run: docker container inspect false-20220921220530-5916 --format={{.State.Status}}
	W0921 22:13:44.521836    6256 cli_runner.go:211] docker container inspect false-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:44.521836    6256 oci.go:658] temporary error verifying shutdown: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:44.521836    6256 oci.go:660] temporary error: container false-20220921220530-5916 status is  but expect it to be exited
	I0921 22:13:44.521836    6256 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:44.874626    6256 cli_runner.go:164] Run: docker container inspect false-20220921220530-5916 --format={{.State.Status}}
	W0921 22:13:45.096208    6256 cli_runner.go:211] docker container inspect false-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:45.096351    6256 oci.go:658] temporary error verifying shutdown: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:45.096351    6256 oci.go:660] temporary error: container false-20220921220530-5916 status is  but expect it to be exited
	I0921 22:13:45.096351    6256 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:45.567804    6256 cli_runner.go:164] Run: docker container inspect false-20220921220530-5916 --format={{.State.Status}}
	W0921 22:13:45.762071    6256 cli_runner.go:211] docker container inspect false-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:45.762071    6256 oci.go:658] temporary error verifying shutdown: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:45.762071    6256 oci.go:660] temporary error: container false-20220921220530-5916 status is  but expect it to be exited
	I0921 22:13:45.762071    6256 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:46.679630    6256 cli_runner.go:164] Run: docker container inspect false-20220921220530-5916 --format={{.State.Status}}
	W0921 22:13:46.858699    6256 cli_runner.go:211] docker container inspect false-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:46.858699    6256 oci.go:658] temporary error verifying shutdown: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:46.858699    6256 oci.go:660] temporary error: container false-20220921220530-5916 status is  but expect it to be exited
	I0921 22:13:46.858699    6256 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:48.589592    6256 cli_runner.go:164] Run: docker container inspect false-20220921220530-5916 --format={{.State.Status}}
	W0921 22:13:48.767755    6256 cli_runner.go:211] docker container inspect false-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:48.767755    6256 oci.go:658] temporary error verifying shutdown: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:48.767755    6256 oci.go:660] temporary error: container false-20220921220530-5916 status is  but expect it to be exited
	I0921 22:13:48.767755    6256 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:52.115802    6256 cli_runner.go:164] Run: docker container inspect false-20220921220530-5916 --format={{.State.Status}}
	W0921 22:13:52.307935    6256 cli_runner.go:211] docker container inspect false-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:52.308186    6256 oci.go:658] temporary error verifying shutdown: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:52.308186    6256 oci.go:660] temporary error: container false-20220921220530-5916 status is  but expect it to be exited
	I0921 22:13:52.308423    6256 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:55.033446    6256 cli_runner.go:164] Run: docker container inspect false-20220921220530-5916 --format={{.State.Status}}
	W0921 22:13:55.225679    6256 cli_runner.go:211] docker container inspect false-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:55.225679    6256 oci.go:658] temporary error verifying shutdown: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:13:55.225679    6256 oci.go:660] temporary error: container false-20220921220530-5916 status is  but expect it to be exited
	I0921 22:13:55.225679    6256 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:00.254746    6256 cli_runner.go:164] Run: docker container inspect false-20220921220530-5916 --format={{.State.Status}}
	W0921 22:14:00.433984    6256 cli_runner.go:211] docker container inspect false-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:00.434254    6256 oci.go:658] temporary error verifying shutdown: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:00.434360    6256 oci.go:660] temporary error: container false-20220921220530-5916 status is  but expect it to be exited
	I0921 22:14:00.434462    6256 oci.go:88] couldn't shut down false-20220921220530-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "false-20220921220530-5916": docker container inspect false-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	 
	I0921 22:14:00.443010    6256 cli_runner.go:164] Run: docker rm -f -v false-20220921220530-5916
	I0921 22:14:00.683036    6256 cli_runner.go:164] Run: docker container inspect -f {{.Id}} false-20220921220530-5916
	W0921 22:14:00.888565    6256 cli_runner.go:211] docker container inspect -f {{.Id}} false-20220921220530-5916 returned with exit code 1
	I0921 22:14:00.896384    6256 cli_runner.go:164] Run: docker network inspect false-20220921220530-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:01.090278    6256 cli_runner.go:211] docker network inspect false-20220921220530-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:01.100595    6256 network_create.go:272] running [docker network inspect false-20220921220530-5916] to gather additional debugging logs...
	I0921 22:14:01.100595    6256 cli_runner.go:164] Run: docker network inspect false-20220921220530-5916
	W0921 22:14:01.290238    6256 cli_runner.go:211] docker network inspect false-20220921220530-5916 returned with exit code 1
	I0921 22:14:01.290238    6256 network_create.go:275] error running [docker network inspect false-20220921220530-5916]: docker network inspect false-20220921220530-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220921220530-5916
	I0921 22:14:01.290238    6256 network_create.go:277] output of [docker network inspect false-20220921220530-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220921220530-5916
	
	** /stderr **
	W0921 22:14:01.291206    6256 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:14:01.291206    6256 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:14:02.302426    6256 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:14:02.306355    6256 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:14:02.306355    6256 start.go:159] libmachine.API.Create for "false-20220921220530-5916" (driver="docker")
	I0921 22:14:02.306355    6256 client.go:168] LocalClient.Create starting
	I0921 22:14:02.306935    6256 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:14:02.306935    6256 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:02.307528    6256 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:02.307908    6256 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:14:02.308079    6256 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:02.308215    6256 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:02.320068    6256 cli_runner.go:164] Run: docker network inspect false-20220921220530-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:02.534486    6256 cli_runner.go:211] docker network inspect false-20220921220530-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:02.543790    6256 network_create.go:272] running [docker network inspect false-20220921220530-5916] to gather additional debugging logs...
	I0921 22:14:02.543790    6256 cli_runner.go:164] Run: docker network inspect false-20220921220530-5916
	W0921 22:14:02.753467    6256 cli_runner.go:211] docker network inspect false-20220921220530-5916 returned with exit code 1
	I0921 22:14:02.753467    6256 network_create.go:275] error running [docker network inspect false-20220921220530-5916]: docker network inspect false-20220921220530-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220921220530-5916
	I0921 22:14:02.753467    6256 network_create.go:277] output of [docker network inspect false-20220921220530-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220921220530-5916
	
	** /stderr **
	I0921 22:14:02.760459    6256 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:14:02.977459    6256 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000ae38] amended:false}} dirty:map[] misses:0}
	I0921 22:14:02.977459    6256 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:14:02.993468    6256 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000ae38] amended:true}} dirty:map[192.168.49.0:0xc00000ae38 192.168.58.0:0xc00000aa40] misses:0}
	I0921 22:14:02.993468    6256 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:14:02.993468    6256 network_create.go:115] attempt to create docker network false-20220921220530-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:14:03.000604    6256 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220921220530-5916 false-20220921220530-5916
	W0921 22:14:03.183590    6256 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220921220530-5916 false-20220921220530-5916 returned with exit code 1
	E0921 22:14:03.183590    6256 network_create.go:104] error while trying to create docker network false-20220921220530-5916 192.168.58.0/24: create docker network false-20220921220530-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220921220530-5916 false-20220921220530-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7a3b592af291a7f0cf99c7539313aff6a91b93921d7a318fc6259bc4475afea9 (br-7a3b592af291): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:14:03.183590    6256 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220921220530-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220921220530-5916 false-20220921220530-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7a3b592af291a7f0cf99c7539313aff6a91b93921d7a318fc6259bc4475afea9 (br-7a3b592af291): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network false-20220921220530-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220921220530-5916 false-20220921220530-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7a3b592af291a7f0cf99c7539313aff6a91b93921d7a318fc6259bc4475afea9 (br-7a3b592af291): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:14:03.198368    6256 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
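The second attempt reserves 192.168.58.0/24 and hits the same overlap, which suggests existing bridge networks already occupy the private ranges minikube tries. Which subnets are taken can be listed with the same IPAM template the log uses; a small sketch (illustration only; assumes a local Docker CLI):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        ids, err := exec.Command("docker", "network", "ls", "-q").Output()
        if err != nil {
            fmt.Println("docker network ls:", err)
            return
        }
        for _, id := range strings.Fields(string(ids)) {
            // Same {{range .IPAM.Config}}{{.Subnet}}{{end}} template seen in the inspect calls above.
            out, err := exec.Command("docker", "network", "inspect", id,
                "--format", `{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}`).Output()
            if err != nil {
                fmt.Println("inspect", id, ":", err)
                continue
            }
            fmt.Print(string(out))
        }
    }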
	I0921 22:14:03.396090    6256 cli_runner.go:164] Run: docker volume create false-20220921220530-5916 --label name.minikube.sigs.k8s.io=false-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:14:03.592859    6256 cli_runner.go:211] docker volume create false-20220921220530-5916 --label name.minikube.sigs.k8s.io=false-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:14:03.592859    6256 client.go:171] LocalClient.Create took 1.2864941s
	I0921 22:14:05.616300    6256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:05.624928    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:05.836576    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:14:05.836891    6256 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:06.097837    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:06.305768    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:14:06.305834    6256 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:06.616473    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:06.808784    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:14:06.808784    6256 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:07.267643    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:07.461729    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	W0921 22:14:07.461806    6256 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	
	W0921 22:14:07.461806    6256 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:07.472938    6256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:07.479685    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:07.678040    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:14:07.678508    6256 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:07.875973    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:08.089723    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:14:08.089723    6256 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:08.373419    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:08.563298    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:14:08.563298    6256 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:09.072772    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:09.268282    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	W0921 22:14:09.268282    6256 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	
	W0921 22:14:09.268282    6256 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:09.268282    6256 start.go:128] duration metric: createHost completed in 6.9658007s
	I0921 22:14:09.284262    6256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:09.293259    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:09.518222    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:14:09.518222    6256 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:09.868885    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:10.080331    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:14:10.080574    6256 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:10.401370    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:10.583636    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:14:10.583636    6256 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:11.047869    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:11.280835    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	W0921 22:14:11.280835    6256 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	
	W0921 22:14:11.280835    6256 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:11.292952    6256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:11.299493    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:11.514154    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:14:11.514291    6256 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:11.710532    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:11.935434    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	I0921 22:14:11.935434    6256 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:12.456645    6256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916
	W0921 22:14:12.649029    6256 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916 returned with exit code 1
	W0921 22:14:12.649029    6256 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	
	W0921 22:14:12.649029    6256 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20220921220530-5916
	I0921 22:14:12.649029    6256 fix.go:57] fixHost completed within 30.6161126s
	I0921 22:14:12.649029    6256 start.go:83] releasing machines lock for "false-20220921220530-5916", held for 30.6161126s
	W0921 22:14:12.649829    6256 out.go:239] * Failed to start docker container. Running "minikube delete -p false-20220921220530-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for false-20220921220530-5916 container: docker volume create false-20220921220530-5916 --label name.minikube.sigs.k8s.io=false-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220921220530-5916: error while creating volume root path '/var/lib/docker/volumes/false-20220921220530-5916': mkdir /var/lib/docker/volumes/false-20220921220530-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p false-20220921220530-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for false-20220921220530-5916 container: docker volume create false-20220921220530-5916 --label name.minikube.sigs.k8s.io=false-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220921220530-5916: error while creating volume root path '/var/lib/docker/volumes/false-20220921220530-5916': mkdir /var/lib/docker/volumes/false-20220921220530-5916: read-only file system
	
	I0921 22:14:12.654870    6256 out.go:177] 
	W0921 22:14:12.659877    6256 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for false-20220921220530-5916 container: docker volume create false-20220921220530-5916 --label name.minikube.sigs.k8s.io=false-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220921220530-5916: error while creating volume root path '/var/lib/docker/volumes/false-20220921220530-5916': mkdir /var/lib/docker/volumes/false-20220921220530-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for false-20220921220530-5916 container: docker volume create false-20220921220530-5916 --label name.minikube.sigs.k8s.io=false-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create false-20220921220530-5916: error while creating volume root path '/var/lib/docker/volumes/false-20220921220530-5916': mkdir /var/lib/docker/volumes/false-20220921220530-5916: read-only file system
	
	W0921 22:14:12.660148    6256 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:14:12.660341    6256 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:14:12.669607    6256 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/false/Start (49.29s)
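
A minimal standalone sketch of the condition behind the PR_DOCKER_READONLY_VOL exit above: the "docker volume create" step fails because the daemon cannot create the volume root path under /var/lib/docker/volumes (read-only file system) on the Docker Desktop backend. The probe below is not part of the test suite; the file and volume names are placeholders, and the only assumption is that the docker CLI is on PATH.

// readonlyprobe.go - hypothetical standalone probe, not minikube code.
// It creates and removes a throwaway volume; a "read-only file system"
// error on create is the same condition reported as PR_DOCKER_READONLY_VOL.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const name = "readonly-probe-volume" // arbitrary throwaway name

	out, err := exec.Command("docker", "volume", "create", name).CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "read-only file system") {
			fmt.Println("volume root is read-only; the log above suggests restarting Docker")
		} else {
			fmt.Printf("volume create failed: %v\n%s", err, out)
		}
		return
	}
	// Clean up the probe volume; a failure here does not affect the check.
	_ = exec.Command("docker", "volume", "rm", name).Run()
	fmt.Println("volume root is writable")
}

As the log's own suggestion and the linked issue (https://github.com/kubernetes/minikube/issues/6825) indicate, this state usually clears after restarting Docker Desktop rather than pointing at the test itself.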

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (2.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916: exit status 7 (611.6661ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:13:34.451123    7464 status.go:247] status error: host: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220921221222-5916 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220921221222-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220921221222-5916: exit status 1 (244.2274ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220921221222-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916: exit status 7 (617.4593ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:13:35.950895    8960 status.go:247] status error: host: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220921221222-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (2.11s)
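
The check that fails above is a thin wrapper around "out/minikube-windows-amd64.exe status --format={{.Host}}", which prints the host state and signals non-running states through its exit code (exit status 7 in the log). A reduced sketch of the same check, with the binary path and profile name copied from the log and everything else only illustrative:

// hoststatus.go - hypothetical sketch of the post-stop status check; the
// expected state after a stop is Stopped, while Nonexistent means the
// backing docker container is gone, as in the failure above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "newest-cni-20220921221222-5916" // profile name from the log

	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))

	// A non-zero exit is expected for stopped or missing hosts, so report
	// the code instead of treating it as a hard failure.
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Printf("status exited with code %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Printf("could not run status: %v\n", err)
		return
	}

	fmt.Printf("host state: %q\n", host)
}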

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (2.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916: exit status 7 (584.7865ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:13:35.460395    3044 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220921221221-5916 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221221-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220921221221-5916: exit status 1 (243.0489ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916: exit status 7 (591.3229ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:13:36.901866    9080 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220921221221-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (2.03s)
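
Both EnableAddonAfterStop failures reduce to the same docker-side fact recorded by the post-mortem: "docker container inspect --format={{.State.Status}}" exits with "No such container", which minikube surfaces as the Nonexistent state. A hypothetical helper showing that mapping (not minikube's actual status code):

// containerstate.go - sketch of the inspect-based state check seen in the
// post-mortem output above; a missing container is reported as Nonexistent.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns the container's status (e.g. "running", "exited"),
// or "Nonexistent" if docker reports that no such container exists.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent", nil
		}
		return "", fmt.Errorf("docker container inspect %s: %v\n%s", name, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Profile name copied from the log above; any container name works here.
	state, err := containerState("default-k8s-different-port-20220921221221-5916")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("state:", state)
}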

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (77.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220921221222-5916 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.2

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-20220921221222-5916 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.2: exit status 60 (1m16.6057291s)

                                                
                                                
-- stdout --
	* [newest-cni-20220921221222-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node newest-cni-20220921221222-5916 in cluster newest-cni-20220921221222-5916
	* Pulling base image ...
	* docker "newest-cni-20220921221222-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-20220921221222-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:13:36.222824    6616 out.go:296] Setting OutFile to fd 1520 ...
	I0921 22:13:36.298278    6616 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:13:36.298278    6616 out.go:309] Setting ErrFile to fd 1564...
	I0921 22:13:36.298278    6616 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:13:36.317272    6616 out.go:303] Setting JSON to false
	I0921 22:13:36.320272    6616 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4484,"bootTime":1663793932,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:13:36.320272    6616 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:13:36.323273    6616 out.go:177] * [newest-cni-20220921221222-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:13:36.326280    6616 notify.go:214] Checking for updates...
	I0921 22:13:36.328281    6616 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:13:36.330280    6616 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:13:36.335289    6616 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:13:36.337594    6616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:13:36.341296    6616 config.go:180] Loaded profile config "newest-cni-20220921221222-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:13:36.342761    6616 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:13:36.650800    6616 docker.go:137] docker version: linux-20.10.17
	I0921 22:13:36.658288    6616 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:13:37.201131    6616 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:91 SystemTime:2022-09-21 22:13:36.8156804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:13:37.207851    6616 out.go:177] * Using the docker driver based on existing profile
	I0921 22:13:37.210154    6616 start.go:284] selected driver: docker
	I0921 22:13:37.210154    6616 start.go:808] validating driver "docker" against &{Name:newest-cni-20220921221222-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:newest-cni-20220921221222-5916 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:13:37.210154    6616 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:13:37.273734    6616 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:13:37.873135    6616 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:91 SystemTime:2022-09-21 22:13:37.4433362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:13:37.873135    6616 start_flags.go:886] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0921 22:13:37.873135    6616 cni.go:95] Creating CNI manager for ""
	I0921 22:13:37.874135    6616 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:13:37.874135    6616 start_flags.go:316] config:
	{Name:newest-cni-20220921221222-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:newest-cni-20220921221222-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-hos
t Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:13:37.878123    6616 out.go:177] * Starting control plane node newest-cni-20220921221222-5916 in cluster newest-cni-20220921221222-5916
	I0921 22:13:37.880148    6616 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:13:37.882129    6616 out.go:177] * Pulling base image ...
	I0921 22:13:37.886113    6616 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:13:37.886113    6616 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:13:37.886113    6616 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:13:37.886113    6616 cache.go:57] Caching tarball of preloaded images
	I0921 22:13:37.886113    6616 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:13:37.887125    6616 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:13:37.887125    6616 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20220921221222-5916\config.json ...
	I0921 22:13:38.106001    6616 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:13:38.106001    6616 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:13:38.106001    6616 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:13:38.106001    6616 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:13:38.106001    6616 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:13:38.106001    6616 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:13:38.106001    6616 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:13:38.106001    6616 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:13:38.106001    6616 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:13:40.521799    6616 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:13:40.521903    6616 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:13:40.521974    6616 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:13:40.522063    6616 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:13:40.745012    6616 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?
	I0921 22:13:42.221239    6616 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:13:42.221239    6616 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:13:42.221239    6616 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:13:42.221239    6616 start.go:364] acquiring machines lock for newest-cni-20220921221222-5916: {Name:mkba2d573750337952145210e595be8251a49600 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:13:42.221239    6616 start.go:368] acquired machines lock for "newest-cni-20220921221222-5916" in 0s
	I0921 22:13:42.222236    6616 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:13:42.222236    6616 fix.go:55] fixHost starting: 
	I0921 22:13:42.237251    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:42.455243    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:42.455317    6616 fix.go:103] recreateIfNeeded on newest-cni-20220921221222-5916: state= err=unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:42.455317    6616 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:13:42.459165    6616 out.go:177] * docker "newest-cni-20220921221222-5916" container is missing, will recreate.
	I0921 22:13:42.461373    6616 delete.go:124] DEMOLISHING newest-cni-20220921221222-5916 ...
	I0921 22:13:42.482881    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:42.675353    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:42.675517    6616 stop.go:75] unable to get state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:42.675582    6616 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:42.692390    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:42.909557    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:42.909557    6616 delete.go:82] Unable to get host status for newest-cni-20220921221222-5916, assuming it has already been deleted: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:42.924509    6616 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220921221222-5916
	W0921 22:13:43.098578    6616 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:13:43.098578    6616 kic.go:356] could not find the container newest-cni-20220921221222-5916 to remove it. will try anyways
	I0921 22:13:43.107575    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:43.320713    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:43.320713    6616 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:43.327718    6616 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-20220921221222-5916 /bin/bash -c "sudo init 0"
	W0921 22:13:43.508825    6616 cli_runner.go:211] docker exec --privileged -t newest-cni-20220921221222-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:13:43.509148    6616 oci.go:646] error shutdown newest-cni-20220921221222-5916: docker exec --privileged -t newest-cni-20220921221222-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:44.528835    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:44.772002    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:44.772047    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:44.772047    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:13:44.772047    6616 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:45.334399    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:45.527069    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:45.527069    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:45.527069    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:13:45.527069    6616 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:46.616457    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:46.827249    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:46.827249    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:46.827249    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:13:46.827249    6616 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:48.150077    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:48.329726    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:48.329972    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:48.329972    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:13:48.330066    6616 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:49.923410    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:50.116418    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:50.116501    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:50.116622    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:13:50.116657    6616 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:52.470870    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:52.678679    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:52.678758    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:52.678758    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:13:52.678758    6616 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:57.199452    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:13:57.393163    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:57.393163    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:13:57.393163    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:13:57.393163    6616 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:00.634897    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:14:00.858448    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:00.858448    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:00.858448    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:14:00.858448    6616 oci.go:88] couldn't shut down newest-cni-20220921221222-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	 
	I0921 22:14:00.868224    6616 cli_runner.go:164] Run: docker rm -f -v newest-cni-20220921221222-5916
	I0921 22:14:01.098880    6616 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220921221222-5916
	W0921 22:14:01.306242    6616 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:01.313112    6616 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221222-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:01.525105    6616 cli_runner.go:211] docker network inspect newest-cni-20220921221222-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:01.533170    6616 network_create.go:272] running [docker network inspect newest-cni-20220921221222-5916] to gather additional debugging logs...
	I0921 22:14:01.533170    6616 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221222-5916
	W0921 22:14:01.726215    6616 cli_runner.go:211] docker network inspect newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:01.726215    6616 network_create.go:275] error running [docker network inspect newest-cni-20220921221222-5916]: docker network inspect newest-cni-20220921221222-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220921221222-5916
	I0921 22:14:01.726215    6616 network_create.go:277] output of [docker network inspect newest-cni-20220921221222-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220921221222-5916
	
	** /stderr **
	W0921 22:14:01.727546    6616 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:14:01.727602    6616 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:14:02.737610    6616 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:14:02.741456    6616 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:14:02.741456    6616 start.go:159] libmachine.API.Create for "newest-cni-20220921221222-5916" (driver="docker")
	I0921 22:14:02.741456    6616 client.go:168] LocalClient.Create starting
	I0921 22:14:02.742456    6616 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:14:02.742456    6616 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:02.742456    6616 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:02.742456    6616 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:14:02.742456    6616 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:02.742456    6616 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:02.751458    6616 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221222-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:02.977459    6616 cli_runner.go:211] docker network inspect newest-cni-20220921221222-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:02.984455    6616 network_create.go:272] running [docker network inspect newest-cni-20220921221222-5916] to gather additional debugging logs...
	I0921 22:14:02.984455    6616 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221222-5916
	W0921 22:14:03.199362    6616 cli_runner.go:211] docker network inspect newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:03.199362    6616 network_create.go:275] error running [docker network inspect newest-cni-20220921221222-5916]: docker network inspect newest-cni-20220921221222-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220921221222-5916
	I0921 22:14:03.199362    6616 network_create.go:277] output of [docker network inspect newest-cni-20220921221222-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220921221222-5916
	
	** /stderr **
	I0921 22:14:03.206389    6616 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:14:03.425092    6616 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003e4528] misses:0}
	I0921 22:14:03.425092    6616 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:14:03.425092    6616 network_create.go:115] attempt to create docker network newest-cni-20220921221222-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:14:03.433091    6616 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916
	W0921 22:14:03.640706    6616 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916 returned with exit code 1
	E0921 22:14:03.640706    6616 network_create.go:104] error while trying to create docker network newest-cni-20220921221222-5916 192.168.49.0/24: create docker network newest-cni-20220921221222-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bfcddec357714eda9f2e5face76d467ac874f4c5aaa274e3e769a0df38b43c24 (br-bfcddec35771): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:14:03.640706    6616 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220921221222-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bfcddec357714eda9f2e5face76d467ac874f4c5aaa274e3e769a0df38b43c24 (br-bfcddec35771): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220921221222-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network bfcddec357714eda9f2e5face76d467ac874f4c5aaa274e3e769a0df38b43c24 (br-bfcddec35771): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:14:03.655505    6616 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:14:03.848385    6616 cli_runner.go:164] Run: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:14:04.028332    6616 cli_runner.go:211] docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:14:04.028597    6616 client.go:171] LocalClient.Create took 1.2870532s
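At this point the first recreate attempt has already gone sideways: the dedicated bridge network could not be created because 192.168.49.0/24 overlaps an existing Docker network on the host, and the follow-up docker volume create also returned exit code 1, so LocalClient.Create finishes with no container to reach. A minimal, illustrative Go sketch (not part of minikube; it only shells out to the same docker commands seen in this log) for enumerating the host's networks and their subnets to find the bridge that already owns the range:

    // list_subnets.go - print every Docker network with its IPv4 subnet so an
    // "overlapping IPv4" conflict like the one above can be traced to the
    // existing bridge that already holds 192.168.49.0/24.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same template style the log itself uses for docker network inspect.
        out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
        if err != nil {
            panic(err)
        }
        for _, name := range strings.Fields(string(out)) {
            subnet, err := exec.Command("docker", "network", "inspect", "-f",
                "{{range .IPAM.Config}}{{.Subnet}}{{end}}", name).Output()
            if err != nil {
                continue // network may have disappeared between the two calls
            }
            fmt.Printf("%-30s %s\n", name, strings.TrimSpace(string(subnet)))
        }
    }

Run against the affected host, output like this would show which br-… bridge is already sitting on 192.168.49.0/24.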
	I0921 22:14:06.053986    6616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:06.061861    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:06.275126    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:06.275490    6616 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:06.448158    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:06.668516    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:06.668743    6616 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:06.992719    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:07.209748    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:07.215498    6616 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:07.814576    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:08.011180    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	W0921 22:14:08.011180    6616 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:14:08.011180    6616 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:08.021189    6616 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:08.030183    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:08.264934    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:08.264998    6616 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:08.465572    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:08.672538    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:08.672538    6616 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:09.024477    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:09.236253    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:09.236253    6616 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:09.715013    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:09.955213    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	W0921 22:14:09.955213    6616 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:14:09.955213    6616 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:09.955213    6616 start.go:128] duration metric: createHost completed in 7.2175451s
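Every "retry.go:31] will retry after …" pair above is the same pattern: the port-22 lookup fails because the container was never created, minikube sleeps for a growing interval, and the step eventually gives up and charges the elapsed time to createHost. A rough sketch of that retry-with-increasing-delay shape (attempt count, delays, and the failing operation below are made up for illustration, not minikube's actual tuning):

    // retry_sketch.go - the general shape behind the "will retry after ..."
    // lines: run an operation, and on failure wait an increasing delay before
    // trying again, up to a fixed number of attempts.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func retry(attempts int, initial time.Duration, op func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay += delay / 2 // grow the wait between attempts
        }
        return err
    }

    func main() {
        err := retry(4, 200*time.Millisecond, func() error {
            // Stand-in for the docker container inspect that keeps failing.
            return errors.New("No such container: newest-cni-20220921221222-5916")
        })
        fmt.Println("gave up:", err)
    }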
	I0921 22:14:09.969189    6616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:09.978355    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:10.179907    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:10.179907    6616 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:10.384396    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:10.567586    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:10.567586    6616 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:10.888574    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:11.080771    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:11.080932    6616 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:11.755689    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:11.965617    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	W0921 22:14:11.965617    6616 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:14:11.965617    6616 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:11.975579    6616 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:11.981634    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:12.168397    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:12.168603    6616 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:12.362466    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:12.548737    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:12.549051    6616 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:12.895368    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:13.113411    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:13.113411    6616 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:13.729958    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:13.924415    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	W0921 22:14:13.924415    6616 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:14:13.924415    6616 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:13.924415    6616 fix.go:57] fixHost completed within 31.701926s
	I0921 22:14:13.924415    6616 start.go:83] releasing machines lock for "newest-cni-20220921221222-5916", held for 31.7029237s
	W0921 22:14:13.924415    6616 start.go:602] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220921221222-5916 container: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220921221222-5916: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220921221222-5916': mkdir /var/lib/docker/volumes/newest-cni-20220921221222-5916: read-only file system
	W0921 22:14:13.925165    6616 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220921221222-5916 container: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220921221222-5916: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220921221222-5916': mkdir /var/lib/docker/volumes/newest-cni-20220921221222-5916: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220921221222-5916 container: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220921221222-5916: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220921221222-5916': mkdir /var/lib/docker/volumes/newest-cni-20220921221222-5916: read-only file system
	
	I0921 22:14:13.925165    6616 start.go:617] Will try again in 5 seconds ...
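The stderr above finally names the real blocker: the daemon cannot create the volume because /var/lib/docker/volumes is a read-only file system, so the five-second pause and the recreate attempt that follows are bound to hit the same wall. A small, hypothetical preflight check in Go that would surface this condition directly (the probe volume name is made up for the example; it relies only on docker volume create/rm, which the log already exercises):

    // probe_volume.go - quick preflight: can the Docker daemon create a volume
    // at all? On this host it cannot ("read-only file system"), so retrying the
    // cluster recreate cannot succeed until the daemon is fixed.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const probe = "preflight-probe-volume" // hypothetical throwaway name
        if out, err := exec.Command("docker", "volume", "create", probe).CombinedOutput(); err != nil {
            fmt.Printf("daemon cannot create volumes: %v\n%s", err, out)
            return
        }
        // Best-effort cleanup of the probe volume; errors are ignored.
        _ = exec.Command("docker", "volume", "rm", probe).Run()
        fmt.Println("volume creation OK")
    }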
	I0921 22:14:18.938336    6616 start.go:364] acquiring machines lock for newest-cni-20220921221222-5916: {Name:mkba2d573750337952145210e595be8251a49600 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:14:18.938486    6616 start.go:368] acquired machines lock for "newest-cni-20220921221222-5916" in 0s
	I0921 22:14:18.938486    6616 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:14:18.938486    6616 fix.go:55] fixHost starting: 
	I0921 22:14:18.955980    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:14:19.142845    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:19.142845    6616 fix.go:103] recreateIfNeeded on newest-cni-20220921221222-5916: state= err=unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:19.142845    6616 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:14:19.145847    6616 out.go:177] * docker "newest-cni-20220921221222-5916" container is missing, will recreate.
	I0921 22:14:19.147797    6616 delete.go:124] DEMOLISHING newest-cni-20220921221222-5916 ...
	I0921 22:14:19.161848    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:14:19.363904    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:14:19.363904    6616 stop.go:75] unable to get state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:19.363904    6616 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:19.379297    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:14:19.568367    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:19.568367    6616 delete.go:82] Unable to get host status for newest-cni-20220921221222-5916, assuming it has already been deleted: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:19.574317    6616 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220921221222-5916
	W0921 22:14:19.755060    6616 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:19.755060    6616 kic.go:356] could not find the container newest-cni-20220921221222-5916 to remove it. will try anyways
	I0921 22:14:19.762041    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:14:19.942946    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:14:19.942946    6616 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:19.950942    6616 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-20220921221222-5916 /bin/bash -c "sudo init 0"
	W0921 22:14:20.145850    6616 cli_runner.go:211] docker exec --privileged -t newest-cni-20220921221222-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:14:20.145850    6616 oci.go:646] error shutdown newest-cni-20220921221222-5916: docker exec --privileged -t newest-cni-20220921221222-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:21.165642    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:14:21.373981    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:21.374049    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:21.374049    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:14:21.374049    6616 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:21.795119    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:14:21.986482    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:21.986482    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:21.986482    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:14:21.986482    6616 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:22.604098    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:14:22.811327    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:22.811434    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:22.811434    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:14:22.811627    6616 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:24.232718    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:14:24.424645    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:24.424645    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:24.424645    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:14:24.424645    6616 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:25.640368    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:14:25.863911    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:25.864060    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:25.864125    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:14:25.864163    6616 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:29.340412    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:14:29.519398    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:29.519616    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:29.519616    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:14:29.519616    6616 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:34.080730    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:14:34.274285    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:34.274406    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:34.274441    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:14:34.274611    6616 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:40.126079    6616 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:14:40.317847    6616 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:40.317847    6616 oci.go:658] temporary error verifying shutdown: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:40.317847    6616 oci.go:660] temporary error: container newest-cni-20220921221222-5916 status is  but expect it to be exited
	I0921 22:14:40.317847    6616 oci.go:88] couldn't shut down newest-cni-20220921221222-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	 
	I0921 22:14:40.325740    6616 cli_runner.go:164] Run: docker rm -f -v newest-cni-20220921221222-5916
	I0921 22:14:40.529513    6616 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220921221222-5916
	W0921 22:14:40.708145    6616 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:40.717782    6616 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221222-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:40.925177    6616 cli_runner.go:211] docker network inspect newest-cni-20220921221222-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:40.934563    6616 network_create.go:272] running [docker network inspect newest-cni-20220921221222-5916] to gather additional debugging logs...
	I0921 22:14:40.934563    6616 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221222-5916
	W0921 22:14:41.140914    6616 cli_runner.go:211] docker network inspect newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:41.141032    6616 network_create.go:275] error running [docker network inspect newest-cni-20220921221222-5916]: docker network inspect newest-cni-20220921221222-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220921221222-5916
	I0921 22:14:41.141032    6616 network_create.go:277] output of [docker network inspect newest-cni-20220921221222-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220921221222-5916
	
	** /stderr **
	W0921 22:14:41.142325    6616 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:14:41.142325    6616 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:14:42.147777    6616 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:14:42.157929    6616 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:14:42.158372    6616 start.go:159] libmachine.API.Create for "newest-cni-20220921221222-5916" (driver="docker")
	I0921 22:14:42.158414    6616 client.go:168] LocalClient.Create starting
	I0921 22:14:42.158953    6616 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:14:42.158979    6616 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:42.158979    6616 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:42.158979    6616 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:14:42.159557    6616 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:42.159681    6616 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:42.169508    6616 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221222-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:42.379840    6616 cli_runner.go:211] docker network inspect newest-cni-20220921221222-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:42.385803    6616 network_create.go:272] running [docker network inspect newest-cni-20220921221222-5916] to gather additional debugging logs...
	I0921 22:14:42.385803    6616 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221222-5916
	W0921 22:14:42.612616    6616 cli_runner.go:211] docker network inspect newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:42.612855    6616 network_create.go:275] error running [docker network inspect newest-cni-20220921221222-5916]: docker network inspect newest-cni-20220921221222-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220921221222-5916
	I0921 22:14:42.613172    6616 network_create.go:277] output of [docker network inspect newest-cni-20220921221222-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220921221222-5916
	
	** /stderr **
	I0921 22:14:42.621181    6616 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:14:42.880173    6616 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003e4528] amended:false}} dirty:map[] misses:0}
	I0921 22:14:42.880304    6616 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:14:42.900784    6616 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003e4528] amended:true}} dirty:map[192.168.49.0:0xc0003e4528 192.168.58.0:0xc000171110] misses:0}
	I0921 22:14:42.900784    6616 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
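The second pass skips 192.168.49.0/24 because the first attempt reserved it for a minute, and moves on to 192.168.58.0/24; the bookkeeping visible in the network.go lines amounts to a reservation map with a short TTL per subnet. An illustrative sketch of that idea (TTL and candidate list are chosen for the example, not taken from the minikube source):

    // reserve_sketch.go - shape of the subnet reservation seen above: a subnet
    // is held for a short TTL so repeated or parallel creates don't race for
    // it, and entries with unexpired reservations are skipped.
    package main

    import (
        "fmt"
        "time"
    )

    type reservation struct{ expires time.Time }

    var reserved = map[string]reservation{}

    // reserveNext returns the first candidate subnet that is not currently
    // held and marks it as reserved for ttl.
    func reserveNext(candidates []string, ttl time.Duration) (string, bool) {
        now := time.Now()
        for _, s := range candidates {
            if r, ok := reserved[s]; ok && now.Before(r.expires) {
                fmt.Println("skipping subnet with unexpired reservation:", s)
                continue
            }
            reserved[s] = reservation{expires: now.Add(ttl)}
            return s, true
        }
        return "", false
    }

    func main() {
        candidates := []string{"192.168.49.0/24", "192.168.58.0/24"}
        first, _ := reserveNext(candidates, time.Minute)  // picks 192.168.49.0/24
        second, _ := reserveNext(candidates, time.Minute) // skips it, picks 192.168.58.0/24
        fmt.Println(first, second)
    }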
	I0921 22:14:42.900784    6616 network_create.go:115] attempt to create docker network newest-cni-20220921221222-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:14:42.907917    6616 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916
	W0921 22:14:43.107431    6616 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916 returned with exit code 1
	E0921 22:14:43.107623    6616 network_create.go:104] error while trying to create docker network newest-cni-20220921221222-5916 192.168.58.0/24: create docker network newest-cni-20220921221222-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9406d3720d010da5dd07cb6a57e7efb76c0d8dd28738c603c452aae1e27cf8b9 (br-9406d3720d01): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:14:43.108003    6616 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220921221222-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9406d3720d010da5dd07cb6a57e7efb76c0d8dd28738c603c452aae1e27cf8b9 (br-9406d3720d01): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network newest-cni-20220921221222-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9406d3720d010da5dd07cb6a57e7efb76c0d8dd28738c603c452aae1e27cf8b9 (br-9406d3720d01): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:14:43.121100    6616 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:14:43.363170    6616 cli_runner.go:164] Run: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:14:43.571558    6616 cli_runner.go:211] docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:14:43.571632    6616 client.go:171] LocalClient.Create took 1.4131759s
	I0921 22:14:45.590814    6616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:45.596892    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:45.783036    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:45.783036    6616 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:45.971480    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:46.167982    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:46.169079    6616 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:46.601848    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:46.828440    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	W0921 22:14:46.828724    6616 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:14:46.828724    6616 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:46.838740    6616 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:46.844187    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:47.046323    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:47.046632    6616 retry.go:31] will retry after 144.863405ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:47.210569    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:47.416437    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:47.416437    6616 retry.go:31] will retry after 410.553224ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:47.838105    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:48.050117    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:48.050635    6616 retry.go:31] will retry after 314.505366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:48.376147    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:48.578400    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	W0921 22:14:48.578400    6616 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:14:48.578400    6616 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:48.578400    6616 start.go:128] duration metric: createHost completed in 6.4305714s
	I0921 22:14:48.588392    6616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:48.595391    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:48.801190    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:48.801613    6616 retry.go:31] will retry after 200.38067ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:49.014145    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:49.235968    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:49.235968    6616 retry.go:31] will retry after 252.474839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:49.507520    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:49.703932    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:49.703932    6616 retry.go:31] will retry after 585.618668ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:50.304254    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:50.516015    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	W0921 22:14:50.516193    6616 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:14:50.516254    6616 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:50.529539    6616 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:50.539406    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:50.735367    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:50.735367    6616 retry.go:31] will retry after 194.626905ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:50.947347    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:51.153967    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:51.153967    6616 retry.go:31] will retry after 346.182076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:51.516627    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:51.728525    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	I0921 22:14:51.729034    6616 retry.go:31] will retry after 579.704465ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:52.333348    6616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916
	W0921 22:14:52.541520    6616 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916 returned with exit code 1
	W0921 22:14:52.541520    6616 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:14:52.541520    6616 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20220921221222-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221222-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	I0921 22:14:52.541520    6616 fix.go:57] fixHost completed within 33.6027657s
	I0921 22:14:52.541520    6616 start.go:83] releasing machines lock for "newest-cni-20220921221222-5916", held for 33.6027657s
	W0921 22:14:52.542360    6616 out.go:239] * Failed to start docker container. Running "minikube delete -p newest-cni-20220921221222-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220921221222-5916 container: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220921221222-5916: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220921221222-5916': mkdir /var/lib/docker/volumes/newest-cni-20220921221222-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p newest-cni-20220921221222-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220921221222-5916 container: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220921221222-5916: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220921221222-5916': mkdir /var/lib/docker/volumes/newest-cni-20220921221222-5916: read-only file system
	
	I0921 22:14:52.546344    6616 out.go:177] 
	W0921 22:14:52.549334    6616 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220921221222-5916 container: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220921221222-5916: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220921221222-5916': mkdir /var/lib/docker/volumes/newest-cni-20220921221222-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for newest-cni-20220921221222-5916 container: docker volume create newest-cni-20220921221222-5916 --label name.minikube.sigs.k8s.io=newest-cni-20220921221222-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create newest-cni-20220921221222-5916: error while creating volume root path '/var/lib/docker/volumes/newest-cni-20220921221222-5916': mkdir /var/lib/docker/volumes/newest-cni-20220921221222-5916: read-only file system
	
	W0921 22:14:52.549334    6616 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:14:52.549334    6616 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:14:52.552344    6616 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p newest-cni-20220921221222-5916 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.2": exit status 60
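
The decisive failure in the stderr block above is the "docker volume create" call that ends with "mkdir /var/lib/docker/volumes/...: read-only file system" (surfaced as PR_DOCKER_READONLY_VOL); every later "docker container inspect" retry fails only because the container was never created. Below is a minimal Go sketch of the same probe, run directly against the local Docker daemon outside of minikube - the volume name and the error-string match are illustrative assumptions, not minikube's own code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Hypothetical probe volume name, used only for this check.
        const name = "readonly-probe"

        // Attempt the same operation that failed in the log above.
        out, err := exec.Command("docker", "volume", "create", name).CombinedOutput()
        if err != nil {
            if strings.Contains(string(out), "read-only file system") {
                fmt.Println("Docker's data root is read-only; restarting Docker Desktop usually clears it.")
            } else {
                fmt.Printf("volume create failed for another reason: %v\n%s", err, out)
            }
            return
        }

        // Creation succeeded, so the read-only condition is not present; clean up the probe volume.
        _ = exec.Command("docker", "volume", "rm", name).Run()
        fmt.Println("volume create succeeded; /var/lib/docker is writable.")
    }

This matches the remedy the log itself prints ("Suggestion: Restart Docker", minikube issue #6825).
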
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220921221222-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220921221222-5916: exit status 1 (308.3224ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220921221222-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916: exit status 7 (608.2229ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:14:53.704852    7220 status.go:247] status error: host: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220921221222-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (77.76s)
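
Much of the 77.76s above is spent in the "will retry after ..." loop: each failed "docker container inspect" is retried after a short, randomized delay until the surrounding step gives up. A generic sketch of that pattern, assuming nothing about minikube's internal retry package beyond what the log shows (the attempt count, base delay, and error text here are illustrative):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs op up to attempts times, sleeping a randomized, growing
    // delay between failures, and returns the last error if all attempts fail.
    func retry(attempts int, base time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // Randomized backoff, roughly like the "will retry after 415.22004ms" lines.
            delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        err := retry(5, 200*time.Millisecond, func() error {
            // Stand-in for "docker container inspect" on a container that never exists.
            return errors.New("No such container: newest-cni-20220921221222-5916")
        })
        fmt.Println("gave up:", err)
    }
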

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (78.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220921221221-5916 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.2

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220921221221-5916 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.2: exit status 60 (1m17.2881183s)

                                                
                                                
-- stdout --
	* [default-k8s-different-port-20220921221221-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-different-port-20220921221221-5916 in cluster default-k8s-different-port-20220921221221-5916
	* Pulling base image ...
	* Another minikube instance is downloading dependencies... 
	* docker "default-k8s-different-port-20220921221221-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-different-port-20220921221221-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:13:37.170396    6968 out.go:296] Setting OutFile to fd 944 ...
	I0921 22:13:37.249671    6968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:13:37.249671    6968 out.go:309] Setting ErrFile to fd 1948...
	I0921 22:13:37.249671    6968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:13:37.267012    6968 out.go:303] Setting JSON to false
	I0921 22:13:37.270524    6968 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4485,"bootTime":1663793932,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:13:37.270524    6968 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:13:37.274734    6968 out.go:177] * [default-k8s-different-port-20220921221221-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:13:37.277125    6968 notify.go:214] Checking for updates...
	I0921 22:13:37.280122    6968 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:13:37.285626    6968 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:13:37.290605    6968 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:13:37.293752    6968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:13:37.296709    6968 config.go:180] Loaded profile config "default-k8s-different-port-20220921221221-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:13:37.297955    6968 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:13:37.608543    6968 docker.go:137] docker version: linux-20.10.17
	I0921 22:13:37.616185    6968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:13:38.185984    6968 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:91 SystemTime:2022-09-21 22:13:37.766256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-p
lugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:13:38.191578    6968 out.go:177] * Using the docker driver based on existing profile
	I0921 22:13:38.194131    6968 start.go:284] selected driver: docker
	I0921 22:13:38.194131    6968 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220921221221-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221221-5916 Name
space:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.min
ikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:13:38.194546    6968 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:13:38.252651    6968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:13:38.793553    6968 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:92 SystemTime:2022-09-21 22:13:38.4227352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:13:38.793553    6968 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:13:38.793553    6968 cni.go:95] Creating CNI manager for ""
	I0921 22:13:38.793553    6968 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 22:13:38.793553    6968 start_flags.go:316] config:
	{Name:default-k8s-different-port-20220921221221-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221221-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDoma
in:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:13:38.798772    6968 out.go:177] * Starting control plane node default-k8s-different-port-20220921221221-5916 in cluster default-k8s-different-port-20220921221221-5916
	I0921 22:13:38.801842    6968 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:13:38.807254    6968 out.go:177] * Pulling base image ...
	I0921 22:13:38.809258    6968 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:13:38.809258    6968 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:13:38.809258    6968 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:13:38.809258    6968 cache.go:57] Caching tarball of preloaded images
	I0921 22:13:38.809258    6968 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:13:38.809258    6968 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:13:38.810257    6968 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20220921221221-5916\config.json ...
	I0921 22:13:39.030450    6968 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:13:39.030450    6968 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:13:39.030450    6968 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:13:39.030450    6968 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:13:39.030450    6968 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:13:39.030450    6968 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:13:39.030450    6968 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:13:39.030450    6968 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:13:39.030450    6968 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:13:41.390780    6968 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:13:41.390780    6968 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:13:41.390780    6968 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:13:41.507165    6968 out.go:204] * Another minikube instance is downloading dependencies... 
	I0921 22:13:42.221239    6968 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:13:42.470801    6968 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms
	I0921 22:13:43.946612    6968 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:13:43.946612    6968 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:13:43.946612    6968 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:13:43.946612    6968 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221221-5916: {Name:mk83eca1da19c7d9c5cd0808c146559719914d48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:13:43.947142    6968 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221221-5916" in 530.3µs
	I0921 22:13:43.947290    6968 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:13:43.947290    6968 fix.go:55] fixHost starting: 
	I0921 22:13:43.960910    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:44.179417    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:44.179586    6968 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220921221221-5916: state= err=unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:44.179681    6968 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:13:44.186971    6968 out.go:177] * docker "default-k8s-different-port-20220921221221-5916" container is missing, will recreate.
	I0921 22:13:44.189830    6968 delete.go:124] DEMOLISHING default-k8s-different-port-20220921221221-5916 ...
	I0921 22:13:44.202418    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:44.411880    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:44.411880    6968 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:44.411880    6968 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:44.425669    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:44.632617    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:44.632778    6968 delete.go:82] Unable to get host status for default-k8s-different-port-20220921221221-5916, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:44.640255    6968 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220921221221-5916
	W0921 22:13:44.848480    6968 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:13:44.848480    6968 kic.go:356] could not find the container default-k8s-different-port-20220921221221-5916 to remove it. will try anyways
	I0921 22:13:44.857182    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:45.111540    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:13:45.111710    6968 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:45.120637    6968 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-different-port-20220921221221-5916 /bin/bash -c "sudo init 0"
	W0921 22:13:45.312022    6968 cli_runner.go:211] docker exec --privileged -t default-k8s-different-port-20220921221221-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:13:45.312022    6968 oci.go:646] error shutdown default-k8s-different-port-20220921221221-5916: docker exec --privileged -t default-k8s-different-port-20220921221221-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:46.329989    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:46.514557    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:46.514715    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:46.514715    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:13:46.514715    6968 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:47.085706    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:47.278512    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:47.278512    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:47.278512    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:13:47.278512    6968 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:48.371882    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:48.566878    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:48.567015    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:48.567015    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:13:48.567087    6968 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:49.892347    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:50.101172    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:50.101257    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:50.101363    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:13:50.101363    6968 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:51.706473    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:51.916284    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:51.916394    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:51.916394    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:13:51.916478    6968 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:54.270757    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:54.485124    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:54.485247    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:54.485247    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:13:54.485247    6968 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:59.013044    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:13:59.252958    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:13:59.253271    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:13:59.253271    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:13:59.253344    6968 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:02.498432    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:14:02.706463    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:02.706463    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:02.706463    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:14:02.706463    6968 oci.go:88] couldn't shut down default-k8s-different-port-20220921221221-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	 
	I0921 22:14:02.713473    6968 cli_runner.go:164] Run: docker rm -f -v default-k8s-different-port-20220921221221-5916
	I0921 22:14:02.909455    6968 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220921221221-5916
	W0921 22:14:03.088678    6968 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:03.094664    6968 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221221-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:03.294369    6968 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221221-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:03.300384    6968 network_create.go:272] running [docker network inspect default-k8s-different-port-20220921221221-5916] to gather additional debugging logs...
	I0921 22:14:03.301363    6968 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221221-5916
	W0921 22:14:03.485099    6968 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:03.485099    6968 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220921221221-5916]: docker network inspect default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220921221221-5916
	I0921 22:14:03.485099    6968 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220921221221-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220921221221-5916
	
	** /stderr **
	W0921 22:14:03.486103    6968 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:14:03.486103    6968 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:14:04.500760    6968 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:14:04.505939    6968 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:14:04.506179    6968 start.go:159] libmachine.API.Create for "default-k8s-different-port-20220921221221-5916" (driver="docker")
	I0921 22:14:04.506348    6968 client.go:168] LocalClient.Create starting
	I0921 22:14:04.506400    6968 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:14:04.506400    6968 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:04.506980    6968 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:04.507114    6968 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:14:04.507114    6968 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:04.507114    6968 main.go:134] libmachine: Parsing certificate...
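The three libmachine steps logged above (reading certificate data, decoding PEM data, parsing certificate) correspond to a standard load of ca.pem and cert.pem. A minimal sketch of those steps with the Go standard library, using the path from the log (loadCert is an illustrative helper, not libmachine's function):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // loadCert mirrors "Reading certificate data / Decoding PEM data / Parsing certificate".
    func loadCert(path string) (*x509.Certificate, error) {
    	data, err := os.ReadFile(path) // read certificate data
    	if err != nil {
    		return nil, err
    	}
    	block, _ := pem.Decode(data) // decode PEM data
    	if block == nil {
    		return nil, fmt.Errorf("no PEM block in %s", path)
    	}
    	return x509.ParseCertificate(block.Bytes) // parse certificate
    }

    func main() {
    	cert, err := loadCert(`C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem`)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(cert.Subject)
    }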
	I0921 22:14:04.516733    6968 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221221-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:04.732781    6968 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221221-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:04.740852    6968 network_create.go:272] running [docker network inspect default-k8s-different-port-20220921221221-5916] to gather additional debugging logs...
	I0921 22:14:04.740852    6968 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221221-5916
	W0921 22:14:04.933992    6968 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:04.933992    6968 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220921221221-5916]: docker network inspect default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220921221221-5916
	I0921 22:14:04.933992    6968 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220921221221-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220921221221-5916
	
	** /stderr **
	I0921 22:14:04.946743    6968 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:14:05.159784    6968 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000146918] misses:0}
	I0921 22:14:05.160837    6968 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
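network.go settles on 192.168.49.0/24 as the first free private /24 here, and on the later attempt further down it steps to 192.168.58.0/24. A sketch of that kind of selection, assuming candidates start at 192.168.49.0 and step by 9 as the two logged choices suggest (pickFreeSubnet and the candidate list are assumptions for illustration, not minikube's exact policy):

    package main

    import (
    	"fmt"
    	"net"
    )

    // pickFreeSubnet walks candidate /24 blocks and returns the first one that
    // does not overlap any subnet already in use.
    func pickFreeSubnet(inUse []*net.IPNet) (*net.IPNet, error) {
    	for third := 49; third <= 247; third += 9 {
    		_, candidate, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
    		if err != nil {
    			return nil, err
    		}
    		conflict := false
    		for _, used := range inUse {
    			if used.Contains(candidate.IP) || candidate.Contains(used.IP) {
    				conflict = true
    				break
    			}
    		}
    		if !conflict {
    			return candidate, nil
    		}
    	}
    	return nil, fmt.Errorf("no free /24 found")
    }

    func main() {
    	_, busy, _ := net.ParseCIDR("192.168.49.0/24")
    	free, err := pickFreeSubnet([]*net.IPNet{busy})
    	fmt.Println(free, err) // with 49.0 busy, this picks 192.168.58.0/24
    }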
	I0921 22:14:05.160929    6968 network_create.go:115] attempt to create docker network default-k8s-different-port-20220921221221-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:14:05.168921    6968 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916
	W0921 22:14:05.359123    6968 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916 returned with exit code 1
	E0921 22:14:05.359123    6968 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220921221221-5916 192.168.49.0/24: create docker network default-k8s-different-port-20220921221221-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b885048ce5e35ead1fa9d16513db1b88a63abb261961562afa348d2f104f868d (br-b885048ce5e3): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:14:05.359123    6968 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220921221221-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b885048ce5e35ead1fa9d16513db1b88a63abb261961562afa348d2f104f868d (br-b885048ce5e3): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220921221221-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network b885048ce5e35ead1fa9d16513db1b88a63abb261961562afa348d2f104f868d (br-b885048ce5e3): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:14:05.373400    6968 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
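The "docker network create" above is rejected because the daemon already has a bridge (br-a04d36bfb3cf) on an overlapping IPv4 range that the in-process subnet bookkeeping did not know about. One way to see the daemon's view is to collect the IPAM subnets of every existing network with the same inspect template this log uses; a sketch, assuming only the docker CLI (dockerSubnetsInUse is an illustrative helper):

    package main

    import (
    	"fmt"
    	"net"
    	"os/exec"
    	"strings"
    )

    // dockerSubnetsInUse lists every docker network and gathers its IPAM subnets,
    // i.e. the data the daemon consults when it refuses an overlapping create.
    func dockerSubnetsInUse() ([]*net.IPNet, error) {
    	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
    	if err != nil {
    		return nil, err
    	}
    	var subnets []*net.IPNet
    	for _, id := range strings.Fields(string(ids)) {
    		out, err := exec.Command("docker", "network", "inspect", id,
    			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
    		if err != nil {
    			continue // the network may have vanished between ls and inspect
    		}
    		for _, cidr := range strings.Fields(string(out)) {
    			if _, n, err := net.ParseCIDR(cidr); err == nil {
    				subnets = append(subnets, n)
    			}
    		}
    	}
    	return subnets, nil
    }

    func main() {
    	subnets, err := dockerSubnetsInUse()
    	fmt.Println(subnets, err)
    }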
	I0921 22:14:05.598190    6968 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:14:05.788896    6968 cli_runner.go:211] docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:14:05.788896    6968 client.go:171] LocalClient.Create took 1.2825159s
	I0921 22:14:07.817952    6968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:07.825750    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:08.027566    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:08.027566    6968 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:08.200174    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:08.392531    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:08.393023    6968 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:08.712466    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:08.906714    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:08.907030    6968 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:09.500200    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:09.690494    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	W0921 22:14:09.690770    6968 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:14:09.690770    6968 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
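The lookup that keeps failing above is the ssh host-port resolution: docker is asked which host port it published for the node's 22/tcp, and because the container was never created the inspect answers "No such container". A sketch of that lookup using the same template as the log, assuming only the docker CLI (sshHostPort is an illustrative name):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort resolves the host port docker publishes for the guest's 22/tcp.
    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("default-k8s-different-port-20220921221221-5916")
    	fmt.Println(port, err)
    }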
	I0921 22:14:09.702327    6968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:09.711745    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:09.955213    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:09.955213    6968 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:10.159907    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:10.362360    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:10.362407    6968 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:10.712447    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:10.924844    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:10.924844    6968 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:11.399764    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:11.608328    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	W0921 22:14:11.608328    6968 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:14:11.608328    6968 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
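The two shell one-liners here, df -h /var | awk 'NR==2{print $5}' and df -BG /var | awk 'NR==2{print $4}', would report the node's /var usage percentage and available GiB if an ssh session existed. A sketch of the same column extraction done in Go on captured df output (varUsage and the sample output are illustrative):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // varUsage pulls the "Use%" and "Available" columns from df output,
    // matching what awk 'NR==2{print $5}' and 'NR==2{print $4}' extract.
    func varUsage(dfOutput string) (usedPercent, available string, err error) {
    	lines := strings.Split(strings.TrimSpace(dfOutput), "\n")
    	if len(lines) < 2 {
    		return "", "", fmt.Errorf("unexpected df output: %q", dfOutput)
    	}
    	fields := strings.Fields(lines[1]) // NR==2 in awk terms
    	if len(fields) < 5 {
    		return "", "", fmt.Errorf("unexpected df columns: %q", lines[1])
    	}
    	return fields[4], fields[3], nil // $5 = Use%, $4 = Available
    }

    func main() {
    	sample := "Filesystem     1G-blocks  Used Available Use% Mounted on\n" +
    		"overlay              100G   20G       80G  20% /var\n"
    	used, avail, err := varUsage(sample)
    	fmt.Println(used, avail, err)
    }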
	I0921 22:14:11.608328    6968 start.go:128] duration metric: createHost completed in 7.1073738s
	I0921 22:14:11.619039    6968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:11.626532    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:11.856020    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:11.856020    6968 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:12.070976    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:12.277509    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:12.277913    6968 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:12.587340    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:12.790361    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:12.790361    6968 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:13.476875    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:13.642531    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	W0921 22:14:13.642531    6968 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:14:13.642531    6968 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:13.653528    6968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:13.660537    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:13.846755    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:13.846755    6968 retry.go:31] will retry after 175.796719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:14.039759    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:14.252592    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:14.252592    6968 retry.go:31] will retry after 322.826781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:14.590146    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:14.795642    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:14.795642    6968 retry.go:31] will retry after 602.253718ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:15.408347    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:15.599296    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	W0921 22:14:15.599296    6968 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:14:15.599296    6968 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:15.599296    6968 fix.go:57] fixHost completed within 31.6517538s
	I0921 22:14:15.599296    6968 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221221-5916", held for 31.651902s
	W0921 22:14:15.599296    6968 start.go:602] error starting host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220921221221-5916 container: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220921221221-5916: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916: read-only file system
	W0921 22:14:15.599296    6968 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220921221221-5916 container: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220921221221-5916: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916: read-only file system
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220921221221-5916 container: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220921221221-5916: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916: read-only file system
	
	I0921 22:14:15.599296    6968 start.go:617] Will try again in 5 seconds ...
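The underlying failure is now visible: "docker volume create" makes the daemon mkdir under its volumes directory, and that path sits on a read-only filesystem, so every recreate attempt will keep failing until the daemon's storage is writable again. A quick way to confirm where the daemon keeps that state is docker info with the DockerRootDir template; a sketch (dockerRootDir is an illustrative helper, and /var/lib/docker is the usual default, consistent with the error text above):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // dockerRootDir asks the daemon where it stores its state; the volume create
    // above fails because mkdir under <root>/volumes hit a read-only filesystem.
    func dockerRootDir() (string, error) {
    	out, err := exec.Command("docker", "info", "--format", "{{.DockerRootDir}}").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	root, err := dockerRootDir()
    	fmt.Println(root, err) // typically /var/lib/docker
    }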
	I0921 22:14:20.613596    6968 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221221-5916: {Name:mk83eca1da19c7d9c5cd0808c146559719914d48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:14:20.613596    6968 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221221-5916" in 0s
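The machines lock above is acquired by polling with a 500ms delay and a 10m timeout. A rough in-process sketch of that acquire-with-timeout shape using sync.Mutex.TryLock (minikube's real lock is a named, cross-process lock; acquireWithTimeout and the mutex here are only illustrative of the Delay/Timeout behaviour):

    package main

    import (
    	"errors"
    	"fmt"
    	"sync"
    	"time"
    )

    // acquireWithTimeout polls TryLock every delay until the timeout elapses.
    func acquireWithTimeout(mu *sync.Mutex, delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if mu.TryLock() {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out acquiring machines lock")
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	var mu sync.Mutex
    	start := time.Now()
    	if err := acquireWithTimeout(&mu, 500*time.Millisecond, 10*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("acquired machines lock in %v\n", time.Since(start))
    }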
	I0921 22:14:20.614200    6968 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:14:20.614305    6968 fix.go:55] fixHost starting: 
	I0921 22:14:20.628578    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:14:20.846048    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:20.846200    6968 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220921221221-5916: state= err=unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:20.846239    6968 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:14:20.851126    6968 out.go:177] * docker "default-k8s-different-port-20220921221221-5916" container is missing, will recreate.
	I0921 22:14:20.853578    6968 delete.go:124] DEMOLISHING default-k8s-different-port-20220921221221-5916 ...
	I0921 22:14:20.867349    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:14:21.047948    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:14:21.047948    6968 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:21.047948    6968 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:21.062595    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:14:21.265929    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:21.265929    6968 delete.go:82] Unable to get host status for default-k8s-different-port-20220921221221-5916, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:21.272966    6968 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220921221221-5916
	W0921 22:14:21.498686    6968 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:21.498686    6968 kic.go:356] could not find the container default-k8s-different-port-20220921221221-5916 to remove it. will try anyways
	I0921 22:14:21.506616    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:14:21.736423    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:14:21.736423    6968 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:21.742430    6968 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-different-port-20220921221221-5916 /bin/bash -c "sudo init 0"
	W0921 22:14:21.939450    6968 cli_runner.go:211] docker exec --privileged -t default-k8s-different-port-20220921221221-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:14:21.939450    6968 oci.go:646] error shutdown default-k8s-different-port-20220921221221-5916: docker exec --privileged -t default-k8s-different-port-20220921221221-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
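The sequence here is: attempt a clean poweroff inside the node with docker exec and "sudo init 0", then poll the container state until it reports "exited", backing off between polls (the retry lines that follow grow from roughly 0.4s to 5.8s). A sketch of that shutdown-and-verify loop, assuming only the docker CLI (shutdownAndWait and its fixed doubling backoff are illustrative, not minikube's exact oci.go logic):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // shutdownAndWait asks the node for a clean poweroff, then polls docker
    // for an "exited" state with a growing delay until the deadline passes.
    func shutdownAndWait(name string, deadline time.Duration) error {
    	// best-effort clean shutdown; failure is tolerated, as in the log above
    	_ = exec.Command("docker", "exec", "--privileged", "-t", name,
    		"/bin/bash", "-c", "sudo init 0").Run()

    	delay := 400 * time.Millisecond
    	for start := time.Now(); time.Since(start) < deadline; {
    		out, err := exec.Command("docker", "container", "inspect", name,
    			"--format", "{{.State.Status}}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == "exited" {
    			return nil
    		}
    		time.Sleep(delay)
    		delay *= 2 // roughly the backoff visible in the retry lines below
    	}
    	return fmt.Errorf("couldn't verify container %s is exited", name)
    }

    func main() {
    	fmt.Println(shutdownAndWait("default-k8s-different-port-20220921221221-5916", 20*time.Second))
    }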
	I0921 22:14:22.966525    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:14:23.171434    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:23.171562    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:23.171636    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:14:23.171668    6968 retry.go:31] will retry after 396.557122ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:23.581701    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:14:23.777377    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:23.777377    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:23.777377    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:14:23.777377    6968 retry.go:31] will retry after 597.811922ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:24.394155    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:14:24.593540    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:24.593637    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:24.593637    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:14:24.593637    6968 retry.go:31] will retry after 1.409144665s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:26.024664    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:14:26.234881    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:26.234881    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:26.234881    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:14:26.234881    6968 retry.go:31] will retry after 1.192358242s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:27.444079    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:14:27.634968    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:27.634968    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:27.634968    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:14:27.634968    6968 retry.go:31] will retry after 3.456004252s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:31.110973    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:14:31.319942    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:31.319942    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:31.319942    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:14:31.319942    6968 retry.go:31] will retry after 4.543793083s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:35.880537    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:14:36.075843    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:36.075843    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:36.075843    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:14:36.075843    6968 retry.go:31] will retry after 5.830976587s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:41.925514    6968 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:14:42.116752    6968 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:42.116984    6968 oci.go:658] temporary error verifying shutdown: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:42.116984    6968 oci.go:660] temporary error: container default-k8s-different-port-20220921221221-5916 status is  but expect it to be exited
	I0921 22:14:42.117050    6968 oci.go:88] couldn't shut down default-k8s-different-port-20220921221221-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	 
	I0921 22:14:42.124429    6968 cli_runner.go:164] Run: docker rm -f -v default-k8s-different-port-20220921221221-5916
	I0921 22:14:42.354801    6968 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20220921221221-5916
	W0921 22:14:42.549372    6968 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:42.556368    6968 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221221-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:42.766997    6968 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221221-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:42.774296    6968 network_create.go:272] running [docker network inspect default-k8s-different-port-20220921221221-5916] to gather additional debugging logs...
	I0921 22:14:42.774296    6968 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221221-5916
	W0921 22:14:42.967916    6968 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:42.967916    6968 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220921221221-5916]: docker network inspect default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220921221221-5916
	I0921 22:14:42.967916    6968 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220921221221-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220921221221-5916
	
	** /stderr **
	W0921 22:14:42.968928    6968 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:14:42.968928    6968 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:14:43.980384    6968 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:14:43.984397    6968 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:14:43.984599    6968 start.go:159] libmachine.API.Create for "default-k8s-different-port-20220921221221-5916" (driver="docker")
	I0921 22:14:43.984599    6968 client.go:168] LocalClient.Create starting
	I0921 22:14:43.996924    6968 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:14:43.996924    6968 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:43.996924    6968 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:43.996924    6968 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:14:43.997611    6968 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:43.997685    6968 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:44.006687    6968 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221221-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:44.199663    6968 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221221-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:44.206844    6968 network_create.go:272] running [docker network inspect default-k8s-different-port-20220921221221-5916] to gather additional debugging logs...
	I0921 22:14:44.206844    6968 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221221-5916
	W0921 22:14:44.388612    6968 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:44.388612    6968 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220921221221-5916]: docker network inspect default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220921221221-5916
	I0921 22:14:44.388612    6968 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220921221221-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220921221221-5916
	
	** /stderr **
	I0921 22:14:44.395848    6968 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:14:44.622502    6968 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000146918] amended:false}} dirty:map[] misses:0}
	I0921 22:14:44.622502    6968 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:14:44.636424    6968 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000146918] amended:true}} dirty:map[192.168.49.0:0xc000146918 192.168.58.0:0xc000146ab0] misses:0}
	I0921 22:14:44.636424    6968 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
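On this second pass the 192.168.49.0 block is skipped because its one-minute reservation from the earlier attempt has not yet expired, so 192.168.58.0/24 is chosen instead. A sketch of that reservation bookkeeping as a process-local map of subnet to expiry time (the reservations type is illustrative; the struct dumped in the log appears to be a sync.Map holding the same information):

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    // reservations mimics "reserving subnet ... for 1m0s" and "skipping subnet
    // ... that has unexpired reservation": a map of subnet -> expiry so two
    // concurrent creates don't pick the same block.
    type reservations struct {
    	mu sync.Mutex
    	m  map[string]time.Time
    }

    func (r *reservations) tryReserve(subnet string, ttl time.Duration) bool {
    	r.mu.Lock()
    	defer r.mu.Unlock()
    	if exp, ok := r.m[subnet]; ok && time.Now().Before(exp) {
    		return false // unexpired reservation: skip this subnet
    	}
    	r.m[subnet] = time.Now().Add(ttl)
    	return true
    }

    func main() {
    	r := &reservations{m: map[string]time.Time{}}
    	fmt.Println(r.tryReserve("192.168.49.0", time.Minute)) // true: reserved
    	fmt.Println(r.tryReserve("192.168.49.0", time.Minute)) // false: skipped
    	fmt.Println(r.tryReserve("192.168.58.0", time.Minute)) // true: next block
    }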
	I0921 22:14:44.636424    6968 network_create.go:115] attempt to create docker network default-k8s-different-port-20220921221221-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:14:44.643721    6968 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916
	W0921 22:14:44.854906    6968 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916 returned with exit code 1
	E0921 22:14:44.855234    6968 network_create.go:104] error while trying to create docker network default-k8s-different-port-20220921221221-5916 192.168.58.0/24: create docker network default-k8s-different-port-20220921221221-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 47e588432b7511441bda8047fb6c5f64f4b40cafdbaff9a30310dd3189afca81 (br-47e588432b75): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:14:44.855507    6968 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220921221221-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 47e588432b7511441bda8047fb6c5f64f4b40cafdbaff9a30310dd3189afca81 (br-47e588432b75): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network default-k8s-different-port-20220921221221-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 47e588432b7511441bda8047fb6c5f64f4b40cafdbaff9a30310dd3189afca81 (br-47e588432b75): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:14:44.868506    6968 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:14:45.064560    6968 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:14:45.265696    6968 cli_runner.go:211] docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:14:45.265696    6968 client.go:171] LocalClient.Create took 1.2810863s
	I0921 22:14:47.289456    6968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:47.296573    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:47.493634    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:47.493917    6968 retry.go:31] will retry after 164.582069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:47.666999    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:47.864802    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:47.865205    6968 retry.go:31] will retry after 415.22004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:48.288850    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:48.499135    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	W0921 22:14:48.499135    6968 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:14:48.499135    6968 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:48.511567    6968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:48.518390    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:48.709908    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:48.709961    6968 retry.go:31] will retry after 144.863405ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:48.873254    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:49.064148    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:49.064148    6968 retry.go:31] will retry after 410.553224ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:49.490521    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:49.687980    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:49.687980    6968 retry.go:31] will retry after 314.505366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:50.024848    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:50.234294    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	W0921 22:14:50.234435    6968 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:14:50.234435    6968 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:50.234435    6968 start.go:128] duration metric: createHost completed in 6.2540002s
	I0921 22:14:50.244747    6968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:50.251844    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:50.454972    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:50.455177    6968 retry.go:31] will retry after 200.38067ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:50.668026    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:50.875169    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:50.875169    6968 retry.go:31] will retry after 252.474839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:51.151032    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:51.353889    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:51.353889    6968 retry.go:31] will retry after 585.618668ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:51.956545    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:52.152610    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	W0921 22:14:52.152610    6968 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:14:52.152610    6968 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:52.164997    6968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:52.172370    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:52.399009    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:52.399292    6968 retry.go:31] will retry after 194.626905ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:52.614238    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:52.821430    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:52.821430    6968 retry.go:31] will retry after 346.182076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:53.188884    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:53.391725    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	I0921 22:14:53.391725    6968 retry.go:31] will retry after 579.704465ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:53.985873    6968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916
	W0921 22:14:54.182462    6968 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916 returned with exit code 1
	W0921 22:14:54.182462    6968 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:14:54.182462    6968 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20220921221221-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221221-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	I0921 22:14:54.182462    6968 fix.go:57] fixHost completed within 33.5678881s
	I0921 22:14:54.182462    6968 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221221-5916", held for 33.568597s
	W0921 22:14:54.183460    6968 out.go:239] * Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220921221221-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220921221221-5916 container: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220921221221-5916: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220921221221-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220921221221-5916 container: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220921221221-5916: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916: read-only file system
	
	I0921 22:14:54.188473    6968 out.go:177] 
	W0921 22:14:54.191466    6968 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220921221221-5916 container: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220921221221-5916: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for default-k8s-different-port-20220921221221-5916 container: docker volume create default-k8s-different-port-20220921221221-5916 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221221-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create default-k8s-different-port-20220921221221-5916: error while creating volume root path '/var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916': mkdir /var/lib/docker/volumes/default-k8s-different-port-20220921221221-5916: read-only file system
	
	W0921 22:14:54.191466    6968 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:14:54.191466    6968 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:14:54.195455    6968 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220921221221-5916 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.2": exit status 60
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221221-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220921221221-5916: exit status 1 (240.0053ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220921221221-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916: exit status 7 (583.8797ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:14:55.244193    8772 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220921221221-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (78.34s)
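The two root causes visible in the log above are host-side: the requested 192.168.49.0/24 subnet overlaps an already-existing bridge network, and /var/lib/docker/volumes inside the Docker Desktop VM is mounted read-only. A minimal diagnostic sketch using only documented docker/minikube CLI commands (a hypothetical follow-up from a POSIX-style shell, not part of the recorded run; the probe volume name is arbitrary, the profile name is taken from the log):

	# Show the subnet claimed by every bridge network, to find the one that
	# overlaps 192.168.49.0/24 and blocked "docker network create".
	docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' $(docker network ls -q --filter driver=bridge)

	# Probe whether the daemon can create volumes at all; hitting the same
	# "read-only file system" error here points at the Docker Desktop VM, not minikube.
	docker volume create readonly-probe && docker volume rm readonly-probe

	# Cleanup suggested by minikube itself for this profile:
	minikube delete -p default-k8s-different-port-20220921221221-5916

Per the output above, minikube's own suggestion for both symptoms is to restart Docker (see the linked issue kubernetes/minikube#6825).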

TestNetworkPlugins/group/bridge/Start (49.62s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20220921220528-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p bridge-20220921220528-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: exit status 60 (49.5124665s)

-- stdout --
	* [bridge-20220921220528-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node bridge-20220921220528-5916 in cluster bridge-20220921220528-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "bridge-20220921220528-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 22:14:12.204342    6568 out.go:296] Setting OutFile to fd 1980 ...
	I0921 22:14:12.278484    6568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:14:12.278484    6568 out.go:309] Setting ErrFile to fd 1556...
	I0921 22:14:12.278484    6568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:14:12.296448    6568 out.go:303] Setting JSON to false
	I0921 22:14:12.303223    6568 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4520,"bootTime":1663793932,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:14:12.303274    6568 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:14:12.308288    6568 out.go:177] * [bridge-20220921220528-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:14:12.311814    6568 notify.go:214] Checking for updates...
	I0921 22:14:12.313867    6568 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:14:12.316860    6568 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:14:12.318987    6568 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:14:12.321011    6568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:14:12.324727    6568 config.go:180] Loaded profile config "default-k8s-different-port-20220921221221-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:14:12.325291    6568 config.go:180] Loaded profile config "false-20220921220530-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:14:12.325400    6568 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:14:12.325400    6568 config.go:180] Loaded profile config "newest-cni-20220921221222-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:14:12.326111    6568 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:14:12.610536    6568 docker.go:137] docker version: linux-20.10.17
	I0921 22:14:12.617540    6568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:14:13.191011    6568 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:93 SystemTime:2022-09-21 22:14:12.7714538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:14:13.195665    6568 out.go:177] * Using the docker driver based on user configuration
	I0921 22:14:13.199416    6568 start.go:284] selected driver: docker
	I0921 22:14:13.199482    6568 start.go:808] validating driver "docker" against <nil>
	I0921 22:14:13.199511    6568 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:14:13.262087    6568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:14:13.814736    6568 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:93 SystemTime:2022-09-21 22:14:13.4160613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:14:13.814736    6568 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:14:13.815745    6568 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:14:13.818743    6568 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:14:13.820742    6568 cni.go:95] Creating CNI manager for "bridge"
	I0921 22:14:13.820742    6568 start_flags.go:311] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0921 22:14:13.820742    6568 start_flags.go:316] config:
	{Name:bridge-20220921220528-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:bridge-20220921220528-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:14:13.824732    6568 out.go:177] * Starting control plane node bridge-20220921220528-5916 in cluster bridge-20220921220528-5916
	I0921 22:14:13.826731    6568 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:14:13.829730    6568 out.go:177] * Pulling base image ...
	I0921 22:14:13.833737    6568 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:14:13.833737    6568 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:14:13.833737    6568 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:14:13.834740    6568 cache.go:57] Caching tarball of preloaded images
	I0921 22:14:13.834740    6568 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:14:13.834740    6568 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:14:13.834740    6568 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\bridge-20220921220528-5916\config.json ...
	I0921 22:14:13.835735    6568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\bridge-20220921220528-5916\config.json: {Name:mk2ddd340bc6bd640e869fad4339ec04c61fcc4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:14:14.048764    6568 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:14:14.048764    6568 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:14:14.048764    6568 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:14:14.048764    6568 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:14:14.048764    6568 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:14:14.048764    6568 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:14:14.048764    6568 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:14:14.048764    6568 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:14:14.048764    6568 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:14:16.501785    6568 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:14:16.501785    6568 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:14:16.501785    6568 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:14:16.502721    6568 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:14:16.711009    6568 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800msI0921 22:14:18.641457    6568 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:14:18.641457    6568 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:14:18.641570    6568 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:14:18.641677    6568 start.go:364] acquiring machines lock for bridge-20220921220528-5916: {Name:mk1ed476dcb37276966ef6730037bc3cfd9285ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:14:18.641804    6568 start.go:368] acquired machines lock for "bridge-20220921220528-5916" in 88.1µs
	I0921 22:14:18.641804    6568 start.go:93] Provisioning new machine with config: &{Name:bridge-20220921220528-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:bridge-20220921220528-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:14:18.641804    6568 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:14:18.646165    6568 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:14:18.646299    6568 start.go:159] libmachine.API.Create for "bridge-20220921220528-5916" (driver="docker")
	I0921 22:14:18.646299    6568 client.go:168] LocalClient.Create starting
	I0921 22:14:18.647182    6568 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:14:18.647414    6568 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:18.647414    6568 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:18.647414    6568 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:14:18.647414    6568 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:18.647414    6568 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:18.656045    6568 cli_runner.go:164] Run: docker network inspect bridge-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:18.875999    6568 cli_runner.go:211] docker network inspect bridge-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:18.884349    6568 network_create.go:272] running [docker network inspect bridge-20220921220528-5916] to gather additional debugging logs...
	I0921 22:14:18.884349    6568 cli_runner.go:164] Run: docker network inspect bridge-20220921220528-5916
	W0921 22:14:19.078808    6568 cli_runner.go:211] docker network inspect bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:19.078808    6568 network_create.go:275] error running [docker network inspect bridge-20220921220528-5916]: docker network inspect bridge-20220921220528-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20220921220528-5916
	I0921 22:14:19.078808    6568 network_create.go:277] output of [docker network inspect bridge-20220921220528-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20220921220528-5916
	
	** /stderr **
	I0921 22:14:19.088800    6568 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:14:19.321845    6568 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005ee3e0] misses:0}
	I0921 22:14:19.322885    6568 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:14:19.322885    6568 network_create.go:115] attempt to create docker network bridge-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:14:19.329801    6568 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-20220921220528-5916 bridge-20220921220528-5916
	W0921 22:14:19.520292    6568 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-20220921220528-5916 bridge-20220921220528-5916 returned with exit code 1
	E0921 22:14:19.520292    6568 network_create.go:104] error while trying to create docker network bridge-20220921220528-5916 192.168.49.0/24: create docker network bridge-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-20220921220528-5916 bridge-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d3e27aebb4504592476a13860656d2cb3a4c500025eaccab175ef28f8804c6b0 (br-d3e27aebb450): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:14:19.520292    6568 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-20220921220528-5916 bridge-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d3e27aebb4504592476a13860656d2cb3a4c500025eaccab175ef28f8804c6b0 (br-d3e27aebb450): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-20220921220528-5916 bridge-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network d3e27aebb4504592476a13860656d2cb3a4c500025eaccab175ef28f8804c6b0 (br-d3e27aebb450): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:14:19.534298    6568 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:14:19.729702    6568 cli_runner.go:164] Run: docker volume create bridge-20220921220528-5916 --label name.minikube.sigs.k8s.io=bridge-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:14:19.926969    6568 cli_runner.go:211] docker volume create bridge-20220921220528-5916 --label name.minikube.sigs.k8s.io=bridge-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:14:19.926969    6568 client.go:171] LocalClient.Create took 1.28066s
	I0921 22:14:21.949493    6568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:21.956452    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:22.142221    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:22.142274    6568 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:22.431448    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:22.639359    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:22.639359    6568 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:23.194429    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:23.433801    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	W0921 22:14:23.433942    6568 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	
	W0921 22:14:23.434093    6568 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:23.443540    6568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:23.449599    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:23.651196    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:23.651196    6568 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:23.895640    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:24.114013    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:24.114273    6568 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:24.479785    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:24.670186    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:24.670186    6568 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:25.357422    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:25.553095    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	W0921 22:14:25.553095    6568 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	
	W0921 22:14:25.553095    6568 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:25.553095    6568 start.go:128] duration metric: createHost completed in 6.9112357s
	I0921 22:14:25.553095    6568 start.go:83] releasing machines lock for "bridge-20220921220528-5916", held for 6.9112357s
	W0921 22:14:25.553095    6568 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for bridge-20220921220528-5916 container: docker volume create bridge-20220921220528-5916 --label name.minikube.sigs.k8s.io=bridge-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/bridge-20220921220528-5916': mkdir /var/lib/docker/volumes/bridge-20220921220528-5916: read-only file system
	I0921 22:14:25.569739    6568 cli_runner.go:164] Run: docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:25.756542    6568 cli_runner.go:211] docker container inspect bridge-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:25.756542    6568 delete.go:82] Unable to get host status for bridge-20220921220528-5916, assuming it has already been deleted: state: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	W0921 22:14:25.756542    6568 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for bridge-20220921220528-5916 container: docker volume create bridge-20220921220528-5916 --label name.minikube.sigs.k8s.io=bridge-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/bridge-20220921220528-5916': mkdir /var/lib/docker/volumes/bridge-20220921220528-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for bridge-20220921220528-5916 container: docker volume create bridge-20220921220528-5916 --label name.minikube.sigs.k8s.io=bridge-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/bridge-20220921220528-5916': mkdir /var/lib/docker/volumes/bridge-20220921220528-5916: read-only file system
	
	I0921 22:14:25.756542    6568 start.go:617] Will try again in 5 seconds ...
	I0921 22:14:30.761433    6568 start.go:364] acquiring machines lock for bridge-20220921220528-5916: {Name:mk1ed476dcb37276966ef6730037bc3cfd9285ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:14:30.761995    6568 start.go:368] acquired machines lock for "bridge-20220921220528-5916" in 310.1µs
	I0921 22:14:30.762050    6568 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:14:30.762050    6568 fix.go:55] fixHost starting: 
	I0921 22:14:30.779304    6568 cli_runner.go:164] Run: docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:30.978696    6568 cli_runner.go:211] docker container inspect bridge-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:30.978696    6568 fix.go:103] recreateIfNeeded on bridge-20220921220528-5916: state= err=unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:30.978696    6568 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:14:30.982497    6568 out.go:177] * docker "bridge-20220921220528-5916" container is missing, will recreate.
	I0921 22:14:30.985999    6568 delete.go:124] DEMOLISHING bridge-20220921220528-5916 ...
	I0921 22:14:30.999731    6568 cli_runner.go:164] Run: docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:31.210034    6568 cli_runner.go:211] docker container inspect bridge-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:14:31.210134    6568 stop.go:75] unable to get state: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:31.210134    6568 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:31.224831    6568 cli_runner.go:164] Run: docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:31.428066    6568 cli_runner.go:211] docker container inspect bridge-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:31.428066    6568 delete.go:82] Unable to get host status for bridge-20220921220528-5916, assuming it has already been deleted: state: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:31.436243    6568 cli_runner.go:164] Run: docker container inspect -f {{.Id}} bridge-20220921220528-5916
	W0921 22:14:31.613182    6568 cli_runner.go:211] docker container inspect -f {{.Id}} bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:31.613375    6568 kic.go:356] could not find the container bridge-20220921220528-5916 to remove it. will try anyways
	I0921 22:14:31.620565    6568 cli_runner.go:164] Run: docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:31.831185    6568 cli_runner.go:211] docker container inspect bridge-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:14:31.831185    6568 oci.go:84] error getting container status, will try to delete anyways: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:31.838167    6568 cli_runner.go:164] Run: docker exec --privileged -t bridge-20220921220528-5916 /bin/bash -c "sudo init 0"
	W0921 22:14:32.030977    6568 cli_runner.go:211] docker exec --privileged -t bridge-20220921220528-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:14:32.030977    6568 oci.go:646] error shutdown bridge-20220921220528-5916: docker exec --privileged -t bridge-20220921220528-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:33.052630    6568 cli_runner.go:164] Run: docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:33.247500    6568 cli_runner.go:211] docker container inspect bridge-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:33.247500    6568 oci.go:658] temporary error verifying shutdown: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:33.247500    6568 oci.go:660] temporary error: container bridge-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:33.247500    6568 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:33.584517    6568 cli_runner.go:164] Run: docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:33.793717    6568 cli_runner.go:211] docker container inspect bridge-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:33.793717    6568 oci.go:658] temporary error verifying shutdown: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:33.793717    6568 oci.go:660] temporary error: container bridge-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:33.793717    6568 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:34.251406    6568 cli_runner.go:164] Run: docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:34.474877    6568 cli_runner.go:211] docker container inspect bridge-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:34.474992    6568 oci.go:658] temporary error verifying shutdown: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:34.475057    6568 oci.go:660] temporary error: container bridge-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:34.475115    6568 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:35.389243    6568 cli_runner.go:164] Run: docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:35.585323    6568 cli_runner.go:211] docker container inspect bridge-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:35.585440    6568 oci.go:658] temporary error verifying shutdown: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:35.585440    6568 oci.go:660] temporary error: container bridge-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:35.585440    6568 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:37.315796    6568 cli_runner.go:164] Run: docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:37.523579    6568 cli_runner.go:211] docker container inspect bridge-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:37.523862    6568 oci.go:658] temporary error verifying shutdown: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:37.524068    6568 oci.go:660] temporary error: container bridge-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:37.524068    6568 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:40.872052    6568 cli_runner.go:164] Run: docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:41.063490    6568 cli_runner.go:211] docker container inspect bridge-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:41.063732    6568 oci.go:658] temporary error verifying shutdown: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:41.063732    6568 oci.go:660] temporary error: container bridge-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:41.063783    6568 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:43.787723    6568 cli_runner.go:164] Run: docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:43.980384    6568 cli_runner.go:211] docker container inspect bridge-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:43.980384    6568 oci.go:658] temporary error verifying shutdown: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:43.980384    6568 oci.go:660] temporary error: container bridge-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:43.980384    6568 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:49.013568    6568 cli_runner.go:164] Run: docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:49.235968    6568 cli_runner.go:211] docker container inspect bridge-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:49.235968    6568 oci.go:658] temporary error verifying shutdown: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:49.235968    6568 oci.go:660] temporary error: container bridge-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:49.235968    6568 oci.go:88] couldn't shut down bridge-20220921220528-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "bridge-20220921220528-5916": docker container inspect bridge-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	 
	I0921 22:14:49.241961    6568 cli_runner.go:164] Run: docker rm -f -v bridge-20220921220528-5916
	I0921 22:14:49.462496    6568 cli_runner.go:164] Run: docker container inspect -f {{.Id}} bridge-20220921220528-5916
	W0921 22:14:49.655966    6568 cli_runner.go:211] docker container inspect -f {{.Id}} bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:49.662989    6568 cli_runner.go:164] Run: docker network inspect bridge-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:49.859681    6568 cli_runner.go:211] docker network inspect bridge-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:49.865440    6568 network_create.go:272] running [docker network inspect bridge-20220921220528-5916] to gather additional debugging logs...
	I0921 22:14:49.865440    6568 cli_runner.go:164] Run: docker network inspect bridge-20220921220528-5916
	W0921 22:14:50.047210    6568 cli_runner.go:211] docker network inspect bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:50.047290    6568 network_create.go:275] error running [docker network inspect bridge-20220921220528-5916]: docker network inspect bridge-20220921220528-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20220921220528-5916
	I0921 22:14:50.047325    6568 network_create.go:277] output of [docker network inspect bridge-20220921220528-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20220921220528-5916
	
	** /stderr **
	W0921 22:14:50.048350    6568 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:14:50.048350    6568 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:14:51.061535    6568 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:14:51.067729    6568 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:14:51.067729    6568 start.go:159] libmachine.API.Create for "bridge-20220921220528-5916" (driver="docker")
	I0921 22:14:51.067729    6568 client.go:168] LocalClient.Create starting
	I0921 22:14:51.068422    6568 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:14:51.068422    6568 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:51.068422    6568 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:51.069177    6568 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:14:51.069177    6568 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:51.069177    6568 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:51.081847    6568 cli_runner.go:164] Run: docker network inspect bridge-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:51.322306    6568 cli_runner.go:211] docker network inspect bridge-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:51.330866    6568 network_create.go:272] running [docker network inspect bridge-20220921220528-5916] to gather additional debugging logs...
	I0921 22:14:51.330866    6568 cli_runner.go:164] Run: docker network inspect bridge-20220921220528-5916
	W0921 22:14:51.509627    6568 cli_runner.go:211] docker network inspect bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:51.509627    6568 network_create.go:275] error running [docker network inspect bridge-20220921220528-5916]: docker network inspect bridge-20220921220528-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20220921220528-5916
	I0921 22:14:51.509627    6568 network_create.go:277] output of [docker network inspect bridge-20220921220528-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20220921220528-5916
	
	** /stderr **
	I0921 22:14:51.518628    6568 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:14:51.779442    6568 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005ee3e0] amended:false}} dirty:map[] misses:0}
	I0921 22:14:51.779442    6568 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:14:51.794276    6568 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005ee3e0] amended:true}} dirty:map[192.168.49.0:0xc0005ee3e0 192.168.58.0:0xc0005ee4f0] misses:0}
	I0921 22:14:51.794646    6568 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:14:51.794646    6568 network_create.go:115] attempt to create docker network bridge-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:14:51.804148    6568 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-20220921220528-5916 bridge-20220921220528-5916
	W0921 22:14:52.026449    6568 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-20220921220528-5916 bridge-20220921220528-5916 returned with exit code 1
	E0921 22:14:52.026449    6568 network_create.go:104] error while trying to create docker network bridge-20220921220528-5916 192.168.58.0/24: create docker network bridge-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-20220921220528-5916 bridge-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9082ba465ce62b59ff15a35bc3964f9282f9a24e43fed58a59a0fc5cc9647283 (br-9082ba465ce6): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:14:52.026449    6568 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-20220921220528-5916 bridge-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9082ba465ce62b59ff15a35bc3964f9282f9a24e43fed58a59a0fc5cc9647283 (br-9082ba465ce6): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network bridge-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-20220921220528-5916 bridge-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 9082ba465ce62b59ff15a35bc3964f9282f9a24e43fed58a59a0fc5cc9647283 (br-9082ba465ce6): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:14:52.040451    6568 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:14:52.261298    6568 cli_runner.go:164] Run: docker volume create bridge-20220921220528-5916 --label name.minikube.sigs.k8s.io=bridge-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:14:52.462247    6568 cli_runner.go:211] docker volume create bridge-20220921220528-5916 --label name.minikube.sigs.k8s.io=bridge-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:14:52.462247    6568 client.go:171] LocalClient.Create took 1.3945068s
	I0921 22:14:54.483234    6568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:54.494233    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:54.724238    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:54.724238    6568 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:54.984115    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:55.197114    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:55.197114    6568 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:55.500243    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:55.701240    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:55.701240    6568 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:56.166271    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:56.349271    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	W0921 22:14:56.349271    6568 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	
	W0921 22:14:56.349271    6568 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:56.359288    6568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:56.366281    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:56.571995    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:56.571995    6568 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:56.771060    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:56.971247    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:56.971247    6568 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:57.252476    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:57.448140    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:57.448140    6568 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:57.944907    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:58.140978    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	W0921 22:14:58.140978    6568 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	
	W0921 22:14:58.140978    6568 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:58.140978    6568 start.go:128] duration metric: createHost completed in 7.0793375s
	I0921 22:14:58.150971    6568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:58.157919    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:58.364336    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:58.364336    6568 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:58.714852    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:58.898966    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:58.898966    6568 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:59.209836    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:14:59.403157    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:14:59.403157    6568 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:14:59.865384    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:15:00.045213    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	W0921 22:15:00.045213    6568 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	
	W0921 22:15:00.045213    6568 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:15:00.058602    6568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:15:00.066210    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:15:00.255412    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:15:00.255412    6568 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:15:00.452022    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:15:00.647931    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	I0921 22:15:00.647931    6568 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:15:01.175163    6568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916
	W0921 22:15:01.416166    6568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916 returned with exit code 1
	W0921 22:15:01.416166    6568 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	
	W0921 22:15:01.416166    6568 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20220921220528-5916
	I0921 22:15:01.416166    6568 fix.go:57] fixHost completed within 30.6538698s
	I0921 22:15:01.416166    6568 start.go:83] releasing machines lock for "bridge-20220921220528-5916", held for 30.6538698s
	W0921 22:15:01.416166    6568 out.go:239] * Failed to start docker container. Running "minikube delete -p bridge-20220921220528-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220921220528-5916 container: docker volume create bridge-20220921220528-5916 --label name.minikube.sigs.k8s.io=bridge-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/bridge-20220921220528-5916': mkdir /var/lib/docker/volumes/bridge-20220921220528-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p bridge-20220921220528-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220921220528-5916 container: docker volume create bridge-20220921220528-5916 --label name.minikube.sigs.k8s.io=bridge-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/bridge-20220921220528-5916': mkdir /var/lib/docker/volumes/bridge-20220921220528-5916: read-only file system
	
	I0921 22:15:01.421135    6568 out.go:177] 
	W0921 22:15:01.423138    6568 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220921220528-5916 container: docker volume create bridge-20220921220528-5916 --label name.minikube.sigs.k8s.io=bridge-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/bridge-20220921220528-5916': mkdir /var/lib/docker/volumes/bridge-20220921220528-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for bridge-20220921220528-5916 container: docker volume create bridge-20220921220528-5916 --label name.minikube.sigs.k8s.io=bridge-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create bridge-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/bridge-20220921220528-5916': mkdir /var/lib/docker/volumes/bridge-20220921220528-5916: read-only file system
	
	W0921 22:15:01.423138    6568 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:15:01.423138    6568 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:15:01.427188    6568 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/bridge/Start (49.62s)
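Note on this failure: the run above exits with PR_DOCKER_READONLY_VOL because `docker volume create` reports the volume root `/var/lib/docker/volumes` as a read-only file system, and the fallback attempt to create a dedicated network also fails with an overlapping-IPv4 conflict. A minimal spot-check, assuming the same Docker Desktop host is still reachable (the probe volume name below is illustrative, not taken from the test run):

	# Probe whether the daemon's volume root is writable; a "read-only file
	# system" error here reproduces the failure above and, per the log's own
	# suggestion, usually clears after restarting Docker Desktop.
	docker volume create readonly-probe && docker volume rm readonly-probe

	# List user-defined bridge networks and their subnets to see what could
	# conflict with minikube's 192.168.58.0/24 attempt.
	docker network ls --filter driver=bridge
	docker network inspect --format "{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}" bridge
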

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (50s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20220921220528-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p enable-default-cni-20220921220528-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: exit status 60 (49.9037921s)

                                                
                                                
-- stdout --
	* [enable-default-cni-20220921220528-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node enable-default-cni-20220921220528-5916 in cluster enable-default-cni-20220921220528-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "enable-default-cni-20220921220528-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:14:15.636276    5552 out.go:296] Setting OutFile to fd 1412 ...
	I0921 22:14:15.698348    5552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:14:15.698348    5552 out.go:309] Setting ErrFile to fd 1812...
	I0921 22:14:15.698938    5552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:14:15.717008    5552 out.go:303] Setting JSON to false
	I0921 22:14:15.735302    5552 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4524,"bootTime":1663793931,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:14:15.735302    5552 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:14:15.744610    5552 out.go:177] * [enable-default-cni-20220921220528-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:14:15.750107    5552 notify.go:214] Checking for updates...
	I0921 22:14:15.752394    5552 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:14:15.754198    5552 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:14:15.756985    5552 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:14:15.760004    5552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:14:15.763694    5552 config.go:180] Loaded profile config "bridge-20220921220528-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:14:15.764271    5552 config.go:180] Loaded profile config "default-k8s-different-port-20220921221221-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:14:15.764388    5552 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:14:15.765001    5552 config.go:180] Loaded profile config "newest-cni-20220921221222-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:14:15.765001    5552 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:14:16.068490    5552 docker.go:137] docker version: linux-20.10.17
	I0921 22:14:16.077006    5552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:14:16.619781    5552 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:94 SystemTime:2022-09-21 22:14:16.2275616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:14:16.622934    5552 out.go:177] * Using the docker driver based on user configuration
	I0921 22:14:16.626032    5552 start.go:284] selected driver: docker
	I0921 22:14:16.626032    5552 start.go:808] validating driver "docker" against <nil>
	I0921 22:14:16.626032    5552 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:14:16.698000    5552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:14:17.237290    5552 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:93 SystemTime:2022-09-21 22:14:16.8457662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
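The `docker system info --format "{{json .}}"` call above is how driver validation probes the daemon before a profile is created. A minimal Go sketch of that probe, decoding only a few of the fields visible in the dump above (the struct and program are illustrative, not minikube's own code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Only the handful of fields this sketch reads; the real JSON has many more.
    type dockerInfo struct {
        ServerVersion   string `json:"ServerVersion"`
        OperatingSystem string `json:"OperatingSystem"`
        OSType          string `json:"OSType"`
        NCPU            int    `json:"NCPU"`
        MemTotal        int64  `json:"MemTotal"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker system info failed:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("decoding docker info failed:", err)
            return
        }
        fmt.Printf("server %s on %s/%s, %d CPUs, %d bytes RAM\n",
            info.ServerVersion, info.OperatingSystem, info.OSType, info.NCPU, info.MemTotal)
    }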
	I0921 22:14:17.237864    5552 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	E0921 22:14:17.238008    5552 start_flags.go:454] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0921 22:14:17.238008    5552 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:14:17.244211    5552 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:14:17.245954    5552 cni.go:95] Creating CNI manager for "bridge"
	I0921 22:14:17.245954    5552 start_flags.go:311] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0921 22:14:17.245954    5552 start_flags.go:316] config:
	{Name:enable-default-cni-20220921220528-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:enable-default-cni-20220921220528-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:14:17.249884    5552 out.go:177] * Starting control plane node enable-default-cni-20220921220528-5916 in cluster enable-default-cni-20220921220528-5916
	I0921 22:14:17.253983    5552 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:14:17.258616    5552 out.go:177] * Pulling base image ...
	I0921 22:14:17.260245    5552 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:14:17.260245    5552 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:14:17.260245    5552 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:14:17.260245    5552 cache.go:57] Caching tarball of preloaded images
	I0921 22:14:17.261063    5552 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:14:17.261063    5552 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:14:17.261714    5552 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\enable-default-cni-20220921220528-5916\config.json ...
	I0921 22:14:17.261869    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\enable-default-cni-20220921220528-5916\config.json: {Name:mk6dcf4df79b901ef769110d1b09db690f43bbb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:14:17.455473    5552 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:14:17.455473    5552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:14:17.455473    5552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:14:17.455473    5552 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:14:17.455473    5552 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:14:17.455473    5552 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:14:17.456445    5552 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:14:17.456445    5552 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:14:17.456445    5552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:14:19.793031    5552 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:14:19.793031    5552 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:14:19.793031    5552 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:14:19.794029    5552 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:14:20.006762    5552 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800ms
	I0921 22:14:21.582460    5552 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:14:21.582460    5552 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:14:21.582460    5552 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:14:21.582460    5552 start.go:364] acquiring machines lock for enable-default-cni-20220921220528-5916: {Name:mkb45a33bffccff54796d50008c9ac38e0bcf5e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:14:21.582460    5552 start.go:368] acquired machines lock for "enable-default-cni-20220921220528-5916" in 0s
	I0921 22:14:21.582460    5552 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-20220921220528-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:enable-default-cni-20220921220528-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:14:21.583464    5552 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:14:21.587511    5552 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:14:21.587511    5552 start.go:159] libmachine.API.Create for "enable-default-cni-20220921220528-5916" (driver="docker")
	I0921 22:14:21.587511    5552 client.go:168] LocalClient.Create starting
	I0921 22:14:21.587511    5552 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:14:21.588473    5552 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:21.588473    5552 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:21.588473    5552 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:14:21.588473    5552 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:21.588473    5552 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:21.597468    5552 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:21.814529    5552 cli_runner.go:211] docker network inspect enable-default-cni-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:21.821550    5552 network_create.go:272] running [docker network inspect enable-default-cni-20220921220528-5916] to gather additional debugging logs...
	I0921 22:14:21.821550    5552 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220921220528-5916
	W0921 22:14:22.002489    5552 cli_runner.go:211] docker network inspect enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:22.002489    5552 network_create.go:275] error running [docker network inspect enable-default-cni-20220921220528-5916]: docker network inspect enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220921220528-5916
	I0921 22:14:22.002489    5552 network_create.go:277] output of [docker network inspect enable-default-cni-20220921220528-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220921220528-5916
	
	** /stderr **
	I0921 22:14:22.008480    5552 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:14:22.224508    5552 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006e8b58] misses:0}
	I0921 22:14:22.224508    5552 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:14:22.224508    5552 network_create.go:115] attempt to create docker network enable-default-cni-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:14:22.232761    5552 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 enable-default-cni-20220921220528-5916
	W0921 22:14:22.420983    5552 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 enable-default-cni-20220921220528-5916 returned with exit code 1
	E0921 22:14:22.421064    5552 network_create.go:104] error while trying to create docker network enable-default-cni-20220921220528-5916 192.168.49.0/24: create docker network enable-default-cni-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7a8b640037f54600be492d9116092162e8aa2069dc8050d5b23e23a553782d62 (br-7a8b640037f5): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:14:22.421396    5552 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7a8b640037f54600be492d9116092162e8aa2069dc8050d5b23e23a553782d62 (br-7a8b640037f5): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 7a8b640037f54600be492d9116092162e8aa2069dc8050d5b23e23a553782d62 (br-7a8b640037f5): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:14:22.435761    5552 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
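The `docker network create` a few lines above fails because 192.168.49.0/24 is already claimed by a bridge network left behind by another test profile, so the daemon reports overlapping IPv4 ranges. A quick way to see which network owns which subnet is to inspect every network's IPAM config; the sketch below is illustrative (not part of the test suite) and reuses the same inspect template the log shows:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List every docker network by name.
        out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
        if err != nil {
            fmt.Println("docker network ls failed:", err)
            return
        }
        for _, name := range strings.Fields(string(out)) {
            // Print each network's subnet(s) so overlapping ranges stand out.
            subnets, err := exec.Command("docker", "network", "inspect", name,
                "--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
            if err != nil {
                continue
            }
            fmt.Printf("%-40s %s\n", name, strings.TrimSpace(string(subnets)))
        }
    }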
	I0921 22:14:22.630358    5552 cli_runner.go:164] Run: docker volume create enable-default-cni-20220921220528-5916 --label name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:14:22.843496    5552 cli_runner.go:211] docker volume create enable-default-cni-20220921220528-5916 --label name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:14:22.843496    5552 client.go:171] LocalClient.Create took 1.2559747s
	I0921 22:14:24.861444    5552 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:24.868259    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:14:25.054255    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:25.054486    5552 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:25.343527    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:14:25.536849    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:25.536849    5552 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:26.086876    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:14:26.281300    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	W0921 22:14:26.281572    5552 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	
	W0921 22:14:26.281572    5552 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:26.292783    5552 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:26.301490    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:14:26.481820    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:26.482235    5552 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:26.727445    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:14:26.944712    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:26.945039    5552 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:27.301562    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:14:27.495853    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:27.496289    5552 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:28.182917    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:14:28.374566    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	W0921 22:14:28.374800    5552 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	
	W0921 22:14:28.374965    5552 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:28.374965    5552 start.go:128] duration metric: createHost completed in 6.7914472s
	I0921 22:14:28.374965    5552 start.go:83] releasing machines lock for "enable-default-cni-20220921220528-5916", held for 6.7924504s
	W0921 22:14:28.375147    5552 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220921220528-5916 container: docker volume create enable-default-cni-20220921220528-5916 --label name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220921220528-5916': mkdir /var/lib/docker/volumes/enable-default-cni-20220921220528-5916: read-only file system
	I0921 22:14:28.389210    5552 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:28.597475    5552 cli_runner.go:211] docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:28.597475    5552 delete.go:82] Unable to get host status for enable-default-cni-20220921220528-5916, assuming it has already been deleted: state: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	W0921 22:14:28.597475    5552 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220921220528-5916 container: docker volume create enable-default-cni-20220921220528-5916 --label name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220921220528-5916': mkdir /var/lib/docker/volumes/enable-default-cni-20220921220528-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220921220528-5916 container: docker volume create enable-default-cni-20220921220528-5916 --label name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220921220528-5916': mkdir /var/lib/docker/volumes/enable-default-cni-20220921220528-5916: read-only file system
	
	I0921 22:14:28.597475    5552 start.go:617] Will try again in 5 seconds ...
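The underlying failure here is not the missing container but the volume step before it: the daemon cannot mkdir under /var/lib/docker/volumes because its storage is mounted read-only, so the retry five seconds later can only hit the same error. A hedged, standalone repro against the same daemon (the volume name is arbitrary, and this is not minikube code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Try to create (and immediately remove) a scratch volume; the name is arbitrary.
        out, err := exec.Command("docker", "volume", "create", "scratch-rw-check").CombinedOutput()
        if err != nil {
            fmt.Printf("volume create failed, daemon storage is likely read-only: %s", out)
            return
        }
        _ = exec.Command("docker", "volume", "rm", "scratch-rw-check").Run()
        fmt.Println("docker root dir is writable")
    }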
	I0921 22:14:33.607802    5552 start.go:364] acquiring machines lock for enable-default-cni-20220921220528-5916: {Name:mkb45a33bffccff54796d50008c9ac38e0bcf5e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:14:33.607802    5552 start.go:368] acquired machines lock for "enable-default-cni-20220921220528-5916" in 0s
	I0921 22:14:33.607802    5552 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:14:33.607802    5552 fix.go:55] fixHost starting: 
	I0921 22:14:33.625091    5552 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:33.808722    5552 cli_runner.go:211] docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:33.808722    5552 fix.go:103] recreateIfNeeded on enable-default-cni-20220921220528-5916: state= err=unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:33.808722    5552 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:14:33.813726    5552 out.go:177] * docker "enable-default-cni-20220921220528-5916" container is missing, will recreate.
	I0921 22:14:33.815722    5552 delete.go:124] DEMOLISHING enable-default-cni-20220921220528-5916 ...
	I0921 22:14:33.828769    5552 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:34.026801    5552 cli_runner.go:211] docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:14:34.026801    5552 stop.go:75] unable to get state: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:34.026801    5552 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:34.042735    5552 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:34.243410    5552 cli_runner.go:211] docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:34.243410    5552 delete.go:82] Unable to get host status for enable-default-cni-20220921220528-5916, assuming it has already been deleted: state: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:34.251406    5552 cli_runner.go:164] Run: docker container inspect -f {{.Id}} enable-default-cni-20220921220528-5916
	W0921 22:14:34.474877    5552 cli_runner.go:211] docker container inspect -f {{.Id}} enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:34.475026    5552 kic.go:356] could not find the container enable-default-cni-20220921220528-5916 to remove it. will try anyways
	I0921 22:14:34.483296    5552 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:34.675802    5552 cli_runner.go:211] docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:14:34.675939    5552 oci.go:84] error getting container status, will try to delete anyways: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:34.683267    5552 cli_runner.go:164] Run: docker exec --privileged -t enable-default-cni-20220921220528-5916 /bin/bash -c "sudo init 0"
	W0921 22:14:34.863326    5552 cli_runner.go:211] docker exec --privileged -t enable-default-cni-20220921220528-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:14:34.863326    5552 oci.go:646] error shutdown enable-default-cni-20220921220528-5916: docker exec --privileged -t enable-default-cni-20220921220528-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:35.883059    5552 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:36.060189    5552 cli_runner.go:211] docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:36.060189    5552 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:36.060189    5552 oci.go:660] temporary error: container enable-default-cni-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:36.060189    5552 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:36.398114    5552 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:36.606611    5552 cli_runner.go:211] docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:36.606889    5552 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:36.606889    5552 oci.go:660] temporary error: container enable-default-cni-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:36.606941    5552 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:37.066031    5552 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:37.273831    5552 cli_runner.go:211] docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:37.273831    5552 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:37.273831    5552 oci.go:660] temporary error: container enable-default-cni-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:37.273831    5552 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:38.192630    5552 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:38.397330    5552 cli_runner.go:211] docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:38.397330    5552 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:38.397330    5552 oci.go:660] temporary error: container enable-default-cni-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:38.397330    5552 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:40.126079    5552 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:40.335511    5552 cli_runner.go:211] docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:40.335778    5552 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:40.335923    5552 oci.go:660] temporary error: container enable-default-cni-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:40.336012    5552 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:43.678943    5552 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:43.886575    5552 cli_runner.go:211] docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:43.886575    5552 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:43.886575    5552 oci.go:660] temporary error: container enable-default-cni-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:43.886575    5552 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:46.616703    5552 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:46.796222    5552 cli_runner.go:211] docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:46.796222    5552 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:46.796222    5552 oci.go:660] temporary error: container enable-default-cni-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:46.796222    5552 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:51.831357    5552 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}
	W0921 22:14:52.042450    5552 cli_runner.go:211] docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:52.042450    5552 oci.go:658] temporary error verifying shutdown: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:52.042450    5552 oci.go:660] temporary error: container enable-default-cni-20220921220528-5916 status is  but expect it to be exited
	I0921 22:14:52.042450    5552 oci.go:88] couldn't shut down enable-default-cni-20220921220528-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "enable-default-cni-20220921220528-5916": docker container inspect enable-default-cni-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	 
	I0921 22:14:52.049480    5552 cli_runner.go:164] Run: docker rm -f -v enable-default-cni-20220921220528-5916
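The repeated "will retry after ..." entries above come from re-running `docker container inspect --format={{.State.Status}}` with growing delays until the container reports an exited state or the attempts run out, after which the code falls back to a forced `docker rm -f -v`. A stripped-down sketch of that pattern (the delays, attempt count, and backoff factor are illustrative, not minikube's actual values):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // containerState wraps `docker container inspect --format={{.State.Status}}`.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        const name = "enable-default-cni-20220921220528-5916"
        delay := 300 * time.Millisecond
        for attempt := 1; attempt <= 8; attempt++ {
            state, err := containerState(name)
            if err == nil && state == "exited" {
                fmt.Println("container is exited")
                return
            }
            fmt.Printf("attempt %d: state %q, err %v; will retry after %v\n", attempt, state, err, delay)
            time.Sleep(delay)
            delay *= 2 // back off before the next attempt
        }
        fmt.Println("giving up: could not verify the container exited")
    }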
	I0921 22:14:52.285185    5552 cli_runner.go:164] Run: docker container inspect -f {{.Id}} enable-default-cni-20220921220528-5916
	W0921 22:14:52.510232    5552 cli_runner.go:211] docker container inspect -f {{.Id}} enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:52.516218    5552 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:52.742665    5552 cli_runner.go:211] docker network inspect enable-default-cni-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:52.755368    5552 network_create.go:272] running [docker network inspect enable-default-cni-20220921220528-5916] to gather additional debugging logs...
	I0921 22:14:52.755433    5552 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220921220528-5916
	W0921 22:14:52.978365    5552 cli_runner.go:211] docker network inspect enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:52.978365    5552 network_create.go:275] error running [docker network inspect enable-default-cni-20220921220528-5916]: docker network inspect enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220921220528-5916
	I0921 22:14:52.978365    5552 network_create.go:277] output of [docker network inspect enable-default-cni-20220921220528-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220921220528-5916
	
	** /stderr **
	W0921 22:14:52.979371    5552 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:14:52.979371    5552 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:14:53.993980    5552 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:14:53.996595    5552 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:14:53.997263    5552 start.go:159] libmachine.API.Create for "enable-default-cni-20220921220528-5916" (driver="docker")
	I0921 22:14:53.997263    5552 client.go:168] LocalClient.Create starting
	I0921 22:14:53.997841    5552 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:14:53.998115    5552 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:53.998115    5552 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:53.998115    5552 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:14:53.998115    5552 main.go:134] libmachine: Decoding PEM data...
	I0921 22:14:53.998115    5552 main.go:134] libmachine: Parsing certificate...
	I0921 22:14:54.007491    5552 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:14:54.222532    5552 cli_runner.go:211] docker network inspect enable-default-cni-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:14:54.231465    5552 network_create.go:272] running [docker network inspect enable-default-cni-20220921220528-5916] to gather additional debugging logs...
	I0921 22:14:54.231538    5552 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220921220528-5916
	W0921 22:14:54.469238    5552 cli_runner.go:211] docker network inspect enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:54.469238    5552 network_create.go:275] error running [docker network inspect enable-default-cni-20220921220528-5916]: docker network inspect enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220921220528-5916
	I0921 22:14:54.469238    5552 network_create.go:277] output of [docker network inspect enable-default-cni-20220921220528-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220921220528-5916
	
	** /stderr **
	I0921 22:14:54.489238    5552 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:14:54.725239    5552 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006e8b58] amended:false}} dirty:map[] misses:0}
	I0921 22:14:54.725239    5552 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:14:54.742243    5552 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006e8b58] amended:true}} dirty:map[192.168.49.0:0xc0006e8b58 192.168.58.0:0xc0004cc640] misses:0}
	I0921 22:14:54.742243    5552 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
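On this second pass the subnet picker skips 192.168.49.0/24, still reserved from the first attempt, and settles on 192.168.58.0/24. Conceptually it walks a list of candidate private /24 ranges and takes the first one that is neither reserved nor already in use; a toy sketch of that idea (the candidate list is illustrative, not minikube's actual table):

    package main

    import "fmt"

    func main() {
        // Candidate private /24 subnets, tried in order (illustrative list).
        candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
        // Subnets reserved by an earlier attempt or claimed by an existing network.
        reserved := map[string]bool{"192.168.49.0/24": true}

        for _, cidr := range candidates {
            if reserved[cidr] {
                fmt.Println("skipping reserved subnet", cidr)
                continue
            }
            fmt.Println("using free private subnet", cidr)
            return
        }
        fmt.Println("no free subnet found")
    }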
	I0921 22:14:54.742243    5552 network_create.go:115] attempt to create docker network enable-default-cni-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:14:54.749228    5552 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 enable-default-cni-20220921220528-5916
	W0921 22:14:54.930136    5552 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 enable-default-cni-20220921220528-5916 returned with exit code 1
	E0921 22:14:54.930367    5552 network_create.go:104] error while trying to create docker network enable-default-cni-20220921220528-5916 192.168.58.0/24: create docker network enable-default-cni-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 385764e4b957531b2e27dc51ccc52ca86cfc9628ebc0e1ed6cbd3962309dc9a2 (br-385764e4b957): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:14:54.938011    5552 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 385764e4b957531b2e27dc51ccc52ca86cfc9628ebc0e1ed6cbd3962309dc9a2 (br-385764e4b957): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network enable-default-cni-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 385764e4b957531b2e27dc51ccc52ca86cfc9628ebc0e1ed6cbd3962309dc9a2 (br-385764e4b957): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:14:54.959148    5552 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:14:55.157102    5552 cli_runner.go:164] Run: docker volume create enable-default-cni-20220921220528-5916 --label name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:14:55.372112    5552 cli_runner.go:211] docker volume create enable-default-cni-20220921220528-5916 --label name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:14:55.372112    5552 client.go:171] LocalClient.Create took 1.3748377s
	I0921 22:14:57.403127    5552 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:14:57.411133    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:14:57.608134    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:57.608134    5552 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:57.862904    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:14:58.061955    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:58.061955    5552 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:58.371339    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:14:58.599476    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:58.599796    5552 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:59.067611    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:14:59.276156    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	W0921 22:14:59.276156    5552 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	
	W0921 22:14:59.276156    5552 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:59.286169    5552 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:14:59.292155    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:14:59.481683    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:59.481683    5552 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:14:59.687214    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:14:59.873385    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:14:59.873385    5552 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:15:00.148893    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:15:00.352549    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:15:00.352549    5552 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:15:00.847429    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:15:01.054551    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	W0921 22:15:01.054551    5552 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	
	W0921 22:15:01.054551    5552 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:15:01.054551    5552 start.go:128] duration metric: createHost completed in 7.0602699s
	I0921 22:15:01.064600    5552 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:15:01.071549    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:15:01.273122    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:15:01.273122    5552 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:15:01.622635    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:15:01.829462    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:15:01.829462    5552 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:15:02.139931    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:15:02.322518    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:15:02.322518    5552 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:15:02.786529    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:15:03.000419    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	W0921 22:15:03.000419    5552 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	
	W0921 22:15:03.000419    5552 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:15:03.011429    5552 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:15:03.018420    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:15:03.207417    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:15:03.207417    5552 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:15:03.402517    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:15:03.607167    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:15:03.607167    5552 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:15:04.133527    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:15:04.334085    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	I0921 22:15:04.334325    5552 retry.go:31] will retry after 673.154531ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:15:05.028166    5552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916
	W0921 22:15:05.241311    5552 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916 returned with exit code 1
	W0921 22:15:05.241311    5552 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	
	W0921 22:15:05.241311    5552 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20220921220528-5916
	I0921 22:15:05.241311    5552 fix.go:57] fixHost completed within 31.6332552s
	I0921 22:15:05.241311    5552 start.go:83] releasing machines lock for "enable-default-cni-20220921220528-5916", held for 31.6332552s
	W0921 22:15:05.241311    5552 out.go:239] * Failed to start docker container. Running "minikube delete -p enable-default-cni-20220921220528-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220921220528-5916 container: docker volume create enable-default-cni-20220921220528-5916 --label name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220921220528-5916': mkdir /var/lib/docker/volumes/enable-default-cni-20220921220528-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p enable-default-cni-20220921220528-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220921220528-5916 container: docker volume create enable-default-cni-20220921220528-5916 --label name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220921220528-5916': mkdir /var/lib/docker/volumes/enable-default-cni-20220921220528-5916: read-only file system
	
	I0921 22:15:05.247448    5552 out.go:177] 
	W0921 22:15:05.249467    5552 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220921220528-5916 container: docker volume create enable-default-cni-20220921220528-5916 --label name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220921220528-5916': mkdir /var/lib/docker/volumes/enable-default-cni-20220921220528-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for enable-default-cni-20220921220528-5916 container: docker volume create enable-default-cni-20220921220528-5916 --label name.minikube.sigs.k8s.io=enable-default-cni-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create enable-default-cni-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/enable-default-cni-20220921220528-5916': mkdir /var/lib/docker/volumes/enable-default-cni-20220921220528-5916: read-only file system
	
	W0921 22:15:05.250467    5552 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:15:05.250467    5552 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:15:05.253461    5552 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (50.00s)
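The chain above contains two distinct Docker-side failures: the dedicated network could not be created because an existing bridge (br-8a3cd8d165a4) already claims an overlapping IPv4 range, and volume creation then hit a read-only /var/lib/docker/volumes. A quick way to see which bridge networks already hold which subnets is sketched below in Go; this is illustrative diagnostic code, not part of minikube, and it only assumes the standard docker CLI (`network ls`, `network inspect`) is on PATH.

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)
	
	// docker network inspect prints a JSON array of network objects; only the
	// name and the IPAM subnets are needed to spot an overlap.
	type network struct {
		Name string `json:"Name"`
		IPAM struct {
			Config []struct {
				Subnet string `json:"Subnet"`
			} `json:"Config"`
		} `json:"IPAM"`
	}
	
	func main() {
		ids, err := exec.Command("docker", "network", "ls", "-q").Output()
		if err != nil {
			panic(err)
		}
		args := append([]string{"network", "inspect"}, strings.Fields(string(ids))...)
		raw, err := exec.Command("docker", args...).Output()
		if err != nil {
			panic(err)
		}
		var nets []network
		if err := json.Unmarshal(raw, &nets); err != nil {
			panic(err)
		}
		for _, n := range nets {
			for _, c := range n.IPAM.Config {
				// Conflicts like br-385764e4b957 vs br-8a3cd8d165a4 show up here
				// as two rows claiming intersecting CIDRs (e.g. 192.168.58.0/24).
				fmt.Printf("%-45s %s\n", n.Name, c.Subnet)
			}
		}
	}

Removing a stale network with `docker network rm` normally frees the conflicting subnet; the read-only volume root lives in the Docker Desktop VM itself and typically needs the restart that the log's final suggestion points to.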

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (2.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-20220921221222-5916 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p newest-cni-20220921221222-5916 "sudo crictl images -o json": exit status 80 (1.1868711s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_18.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p newest-cni-20220921221222-5916 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images JSON: unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:304: v1.25.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.9.3",
- 	"registry.k8s.io/etcd:3.5.4-0",
- 	"registry.k8s.io/kube-apiserver:v1.25.2",
- 	"registry.k8s.io/kube-controller-manager:v1.25.2",
- 	"registry.k8s.io/kube-proxy:v1.25.2",
- 	"registry.k8s.io/kube-scheduler:v1.25.2",
- 	"registry.k8s.io/pause:3.8",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220921221222-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220921221222-5916: exit status 1 (239.3375ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220921221222-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916: exit status 7 (598.3578ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:14:55.749248    6248 status.go:247] status error: host: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220921221222-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (2.04s)
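The empty JSON decode and the "-want +got" list above come from the same step: the test captures `sudo crictl images -o json` over ssh and checks the expected v1.25.2 images against it, and with no container to ssh into the capture is empty. The sketch below shows that kind of check in stand-alone Go; the struct fields follow crictl's JSON output, but the code itself is illustrative and is not the minikube test helper.

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// crictl images -o json returns {"images":[{"repoTags":[...], ...}, ...]}.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	
	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.25.2",
			"registry.k8s.io/pause:3.8",
			// ... remaining entries from the -want list above
		}
		raw, err := exec.Command("crictl", "images", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		// Decoding an empty capture is what produces "unexpected end of JSON input".
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}
		got := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				got[tag] = true
			}
		}
		for _, w := range want {
			if !got[w] {
				fmt.Println("missing:", w)
			}
		}
	}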

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (0.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20220921221221-5916" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221221-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220921221221-5916: exit status 1 (242.1353ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916: exit status 7 (541.2377ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:14:56.034492    8348 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220921221221-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (0.79s)
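The "client config: context ... does not exist" error above is raised before any pod is polled: the profile's start failed, so no kubeconfig context was ever written for it. A minimal client-go reproduction of that error path is sketched below; the loading rules and override are assumptions matching typical kubeconfig handling, not the exact test helper.

	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Standard kubeconfig chain ($KUBECONFIG, then ~/.kube/config), but with a
		// context that was never written because the cluster never came up.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{
			CurrentContext: "default-k8s-different-port-20220921221221-5916",
		}
		cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
		if _, err := cfg.ClientConfig(); err != nil {
			// Prints: client config: context "default-k8s-different-port-20220921221221-5916" does not exist
			fmt.Println("client config:", err)
		}
	}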

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-20220921221222-5916 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p newest-cni-20220921221222-5916 --alsologtostderr -v=1: exit status 80 (1.095041s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:14:56.023497    8924 out.go:296] Setting OutFile to fd 1968 ...
	I0921 22:14:56.086497    8924 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:14:56.086497    8924 out.go:309] Setting ErrFile to fd 1388...
	I0921 22:14:56.086497    8924 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:14:56.098753    8924 out.go:303] Setting JSON to false
	I0921 22:14:56.098753    8924 mustload.go:65] Loading cluster: newest-cni-20220921221222-5916
	I0921 22:14:56.099494    8924 config.go:180] Loaded profile config "newest-cni-20220921221222-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:14:56.114046    8924 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}
	W0921 22:14:56.317286    8924 cli_runner.go:211] docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:56.324264    8924 out.go:177] 
	W0921 22:14:56.328287    8924 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916
	
	W0921 22:14:56.328287    8924 out.go:239] * 
	* 
	W0921 22:14:56.839986    8924 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_26.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_26.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:14:56.842989    8924 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-windows-amd64.exe pause -p newest-cni-20220921221222-5916 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220921221222-5916

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220921221222-5916: exit status 1 (244.2099ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220921221222-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916: exit status 7 (584.2499ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:14:57.687089    2256 status.go:247] status error: host: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220921221222-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220921221222-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220921221222-5916: exit status 1 (256.5646ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220921221222-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220921221222-5916 -n newest-cni-20220921221222-5916: exit status 7 (604.414ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:14:58.567671    7180 status.go:247] status error: host: state: unknown state "newest-cni-20220921221222-5916": docker container inspect newest-cni-20220921221222-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20220921221222-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20220921221222-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (2.81s)
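Both the exit-status-80 pause failure and the exit-status-7 `minikube status` output in this block reduce to one probe: `docker container inspect --format {{.State.Status}}` against a container that no longer exists. A small Go sketch of that probe, and of how a missing container maps to the "Nonexistent" state shown above, follows; the function name and mapping are illustrative rather than the actual driver code.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// hostState mirrors the probe in the logs: when the container is gone,
	// `docker container inspect` exits 1 with "Error: No such container: ...",
	// which `minikube status` surfaces as "Nonexistent" (exit code 7).
	func hostState(profile string) string {
		out, err := exec.Command("docker", "container", "inspect",
			"--format", "{{.State.Status}}", profile).Output()
		if err != nil {
			return "Nonexistent"
		}
		return strings.TrimSpace(string(out)) // e.g. "running", "exited", "paused"
	}
	
	func main() {
		fmt.Println(hostState("newest-cni-20220921221222-5916"))
	}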

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20220921221221-5916" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221221-5916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220921221221-5916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (181.7529ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-different-port-20220921221221-5916" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-different-port-20220921221221-5916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221221-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220921221221-5916: exit status 1 (246.7509ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916: exit status 7 (610.4182ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:14:57.082858    7628 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220921221221-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (1.05s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (2.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220921221221-5916 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220921221221-5916 "sudo crictl images -o json": exit status 80 (1.1691518s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_18.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220921221221-5916 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images JSON: unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:304: v1.25.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.9.3",
- 	"registry.k8s.io/etcd:3.5.4-0",
- 	"registry.k8s.io/kube-apiserver:v1.25.2",
- 	"registry.k8s.io/kube-controller-manager:v1.25.2",
- 	"registry.k8s.io/kube-proxy:v1.25.2",
- 	"registry.k8s.io/kube-scheduler:v1.25.2",
- 	"registry.k8s.io/pause:3.8",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221221-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220921221221-5916: exit status 1 (282.0144ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916: exit status 7 (600.4236ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:14:59.150111    6904 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220921221221-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (2.06s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220921221221-5916 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220921221221-5916 --alsologtostderr -v=1: exit status 80 (1.1550548s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:14:59.466691    8988 out.go:296] Setting OutFile to fd 1760 ...
	I0921 22:14:59.534339    8988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:14:59.534339    8988 out.go:309] Setting ErrFile to fd 1796...
	I0921 22:14:59.534339    8988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:14:59.546766    8988 out.go:303] Setting JSON to false
	I0921 22:14:59.546832    8988 mustload.go:65] Loading cluster: default-k8s-different-port-20220921221221-5916
	I0921 22:14:59.547206    8988 config.go:180] Loaded profile config "default-k8s-different-port-20220921221221-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:14:59.571945    8988 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}
	W0921 22:14:59.776460    8988 cli_runner.go:211] docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:14:59.780436    8988 out.go:177] 
	W0921 22:14:59.782421    8988 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916
	
	W0921 22:14:59.783474    8988 out.go:239] * 
	* 
	W0921 22:15:00.302044    8988 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_26.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_26.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:15:00.305008    8988 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220921221221-5916 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221221-5916

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220921221221-5916: exit status 1 (252.4652ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916: exit status 7 (564.9683ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:15:01.134070    4104 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220921221221-5916" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221221-5916
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220921221221-5916: exit status 1 (270.4172ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220921221221-5916

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220921221221-5916 -n default-k8s-different-port-20220921221221-5916: exit status 7 (621.9967ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0921 22:15:02.039290    5420 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20220921221221-5916": docker container inspect default-k8s-different-port-20220921221221-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20220921221221-5916

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220921221221-5916" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (2.89s)
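Editor's note: the post-mortem above boils down to two checks the harness runs from helpers_test.go: `docker inspect <profile>` (exit 1, "No such object") and `minikube status --format={{.Host}}` (exit 7, "Nonexistent"). The following is a minimal, hypothetical Go sketch for reproducing those same two checks by hand outside the test suite; the profile name and the out/minikube-windows-amd64.exe path are taken from this run and are assumptions, not part of the report.

	// postmortem.go — hypothetical standalone sketch, not part of the minikube test suite.
	// It re-runs the two commands the harness logged above and prints their output.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Profile name copied from this run; substitute your own profile as needed.
		profile := "default-k8s-different-port-20220921221221-5916"

		// Same container check the harness performs; here it exits 1 with
		// "Error: No such object" because the container was never created.
		inspect := exec.Command("docker", "inspect", profile)
		out, err := inspect.CombinedOutput()
		fmt.Printf("docker inspect: err=%v\n%s\n", err, out)

		// Same host-state check; in this run it prints "Nonexistent" and exits 7.
		status := exec.Command("out/minikube-windows-amd64.exe", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile)
		out, err = status.CombinedOutput()
		fmt.Printf("minikube status: err=%v\n%s\n", err, out)
	}

Both commands come straight from the helpers_test.go output above; the sketch only wraps them so the post-mortem can be repeated after the CI run has finished.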

TestNetworkPlugins/group/kubenet/Start (49.2s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20220921220528-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubenet-20220921220528-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: exit status 60 (49.1138355s)

-- stdout --
	* [kubenet-20220921220528-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubenet-20220921220528-5916 in cluster kubenet-20220921220528-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "kubenet-20220921220528-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 22:15:02.288505    6868 out.go:296] Setting OutFile to fd 1572 ...
	I0921 22:15:02.357506    6868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:15:02.357506    6868 out.go:309] Setting ErrFile to fd 1804...
	I0921 22:15:02.357506    6868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:15:02.392264    6868 out.go:303] Setting JSON to false
	I0921 22:15:02.395408    6868 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4570,"bootTime":1663793932,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:15:02.395578    6868 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:15:02.400006    6868 out.go:177] * [kubenet-20220921220528-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:15:02.403816    6868 notify.go:214] Checking for updates...
	I0921 22:15:02.406355    6868 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:15:02.408868    6868 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:15:02.413615    6868 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:15:02.416411    6868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:15:02.420901    6868 config.go:180] Loaded profile config "bridge-20220921220528-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:15:02.421279    6868 config.go:180] Loaded profile config "default-k8s-different-port-20220921221221-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:15:02.421279    6868 config.go:180] Loaded profile config "enable-default-cni-20220921220528-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:15:02.422606    6868 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:15:02.422731    6868 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:15:02.731806    6868 docker.go:137] docker version: linux-20.10.17
	I0921 22:15:02.740172    6868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:15:03.300935    6868 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:95 SystemTime:2022-09-21 22:15:02.890136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-p
lugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:15:03.309054    6868 out.go:177] * Using the docker driver based on user configuration
	I0921 22:15:03.315982    6868 start.go:284] selected driver: docker
	I0921 22:15:03.315982    6868 start.go:808] validating driver "docker" against <nil>
	I0921 22:15:03.315982    6868 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:15:03.401516    6868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:15:04.015016    6868 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:95 SystemTime:2022-09-21 22:15:03.5920886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:15:04.015016    6868 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:15:04.016049    6868 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:15:04.019028    6868 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:15:04.022030    6868 cni.go:91] network plugin configured as "kubenet", returning disabled
	I0921 22:15:04.022030    6868 start_flags.go:316] config:
	{Name:kubenet-20220921220528-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:kubenet-20220921220528-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:15:04.026023    6868 out.go:177] * Starting control plane node kubenet-20220921220528-5916 in cluster kubenet-20220921220528-5916
	I0921 22:15:04.028064    6868 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:15:04.032077    6868 out.go:177] * Pulling base image ...
	I0921 22:15:04.035029    6868 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:15:04.035029    6868 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:15:04.035029    6868 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:15:04.035029    6868 cache.go:57] Caching tarball of preloaded images
	I0921 22:15:04.035029    6868 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:15:04.036047    6868 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:15:04.036047    6868 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubenet-20220921220528-5916\config.json ...
	I0921 22:15:04.036047    6868 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubenet-20220921220528-5916\config.json: {Name:mk79bd3832b784dac1b662220bc02c2771e81cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:15:04.252530    6868 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:15:04.252530    6868 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:15:04.252530    6868 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:15:04.252530    6868 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:15:04.252530    6868 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:15:04.252530    6868 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:15:04.252530    6868 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:15:04.252530    6868 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:15:04.252530    6868 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:15:06.753024    6868 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:15:06.753145    6868 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:15:06.753145    6868 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:15:06.753145    6868 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:15:06.968603    6868 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800msI0921 22:15:08.512420    6868 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:15:08.512420    6868 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:15:08.512420    6868 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:15:08.512420    6868 start.go:364] acquiring machines lock for kubenet-20220921220528-5916: {Name:mkb53d82cac267bd5a4b8d80d3130c6d8fa01e46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:15:08.512420    6868 start.go:368] acquired machines lock for "kubenet-20220921220528-5916" in 0s
	I0921 22:15:08.513028    6868 start.go:93] Provisioning new machine with config: &{Name:kubenet-20220921220528-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:kubenet-20220921220528-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:15:08.513028    6868 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:15:08.516825    6868 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:15:08.517494    6868 start.go:159] libmachine.API.Create for "kubenet-20220921220528-5916" (driver="docker")
	I0921 22:15:08.517544    6868 client.go:168] LocalClient.Create starting
	I0921 22:15:08.517544    6868 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:15:08.518239    6868 main.go:134] libmachine: Decoding PEM data...
	I0921 22:15:08.518239    6868 main.go:134] libmachine: Parsing certificate...
	I0921 22:15:08.518239    6868 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:15:08.518239    6868 main.go:134] libmachine: Decoding PEM data...
	I0921 22:15:08.518239    6868 main.go:134] libmachine: Parsing certificate...
	I0921 22:15:08.527984    6868 cli_runner.go:164] Run: docker network inspect kubenet-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:15:08.712684    6868 cli_runner.go:211] docker network inspect kubenet-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:15:08.720702    6868 network_create.go:272] running [docker network inspect kubenet-20220921220528-5916] to gather additional debugging logs...
	I0921 22:15:08.720702    6868 cli_runner.go:164] Run: docker network inspect kubenet-20220921220528-5916
	W0921 22:15:08.930168    6868 cli_runner.go:211] docker network inspect kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:08.930168    6868 network_create.go:275] error running [docker network inspect kubenet-20220921220528-5916]: docker network inspect kubenet-20220921220528-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220921220528-5916
	I0921 22:15:08.930168    6868 network_create.go:277] output of [docker network inspect kubenet-20220921220528-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220921220528-5916
	
	** /stderr **
	I0921 22:15:08.938024    6868 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:15:09.183213    6868 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000adc048] misses:0}
	I0921 22:15:09.183213    6868 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:15:09.183213    6868 network_create.go:115] attempt to create docker network kubenet-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:15:09.190826    6868 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 kubenet-20220921220528-5916
	W0921 22:15:09.394132    6868 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 kubenet-20220921220528-5916 returned with exit code 1
	E0921 22:15:09.394132    6868 network_create.go:104] error while trying to create docker network kubenet-20220921220528-5916 192.168.49.0/24: create docker network kubenet-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 kubenet-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f3715464c8b932657ae45651e0857df3fa990cf0c3c3dc59a3943138e789990a (br-f3715464c8b9): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:15:09.394132    6868 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 kubenet-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f3715464c8b932657ae45651e0857df3fa990cf0c3c3dc59a3943138e789990a (br-f3715464c8b9): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220921220528-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 kubenet-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network f3715464c8b932657ae45651e0857df3fa990cf0c3c3dc59a3943138e789990a (br-f3715464c8b9): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:15:09.416496    6868 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:15:09.634491    6868 cli_runner.go:164] Run: docker volume create kubenet-20220921220528-5916 --label name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:15:09.829741    6868 cli_runner.go:211] docker volume create kubenet-20220921220528-5916 --label name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:15:09.829741    6868 client.go:171] LocalClient.Create took 1.3121864s
	I0921 22:15:11.844426    6868 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:15:11.850551    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:12.051641    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:12.051641    6868 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:12.340233    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:12.534181    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:12.534580    6868 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:13.091629    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:13.291526    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	W0921 22:15:13.291526    6868 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	
	W0921 22:15:13.291526    6868 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:13.301186    6868 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:15:13.307236    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:13.497211    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:13.497582    6868 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:13.753885    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:13.974677    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:13.974982    6868 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:14.342240    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:14.519744    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:14.519744    6868 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:15.208742    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:15.420476    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	W0921 22:15:15.420684    6868 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	
	W0921 22:15:15.420684    6868 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:15.420788    6868 start.go:128] duration metric: createHost completed in 6.9076003s
	I0921 22:15:15.420788    6868 start.go:83] releasing machines lock for "kubenet-20220921220528-5916", held for 6.9083115s
	W0921 22:15:15.420967    6868 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for kubenet-20220921220528-5916 container: docker volume create kubenet-20220921220528-5916 --label name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220921220528-5916': mkdir /var/lib/docker/volumes/kubenet-20220921220528-5916: read-only file system
	I0921 22:15:15.435246    6868 cli_runner.go:164] Run: docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}
	W0921 22:15:15.637506    6868 cli_runner.go:211] docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:15.637506    6868 delete.go:82] Unable to get host status for kubenet-20220921220528-5916, assuming it has already been deleted: state: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	W0921 22:15:15.637506    6868 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubenet-20220921220528-5916 container: docker volume create kubenet-20220921220528-5916 --label name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220921220528-5916': mkdir /var/lib/docker/volumes/kubenet-20220921220528-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubenet-20220921220528-5916 container: docker volume create kubenet-20220921220528-5916 --label name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220921220528-5916': mkdir /var/lib/docker/volumes/kubenet-20220921220528-5916: read-only file system
	
	I0921 22:15:15.637506    6868 start.go:617] Will try again in 5 seconds ...
	I0921 22:15:20.652158    6868 start.go:364] acquiring machines lock for kubenet-20220921220528-5916: {Name:mkb53d82cac267bd5a4b8d80d3130c6d8fa01e46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:15:20.652405    6868 start.go:368] acquired machines lock for "kubenet-20220921220528-5916" in 0s
	I0921 22:15:20.652405    6868 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:15:20.652405    6868 fix.go:55] fixHost starting: 
	I0921 22:15:20.666787    6868 cli_runner.go:164] Run: docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}
	W0921 22:15:20.839873    6868 cli_runner.go:211] docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:20.839873    6868 fix.go:103] recreateIfNeeded on kubenet-20220921220528-5916: state= err=unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:20.839873    6868 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:15:20.843934    6868 out.go:177] * docker "kubenet-20220921220528-5916" container is missing, will recreate.
	I0921 22:15:20.845853    6868 delete.go:124] DEMOLISHING kubenet-20220921220528-5916 ...
	I0921 22:15:20.858902    6868 cli_runner.go:164] Run: docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}
	W0921 22:15:21.058253    6868 cli_runner.go:211] docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:15:21.058355    6868 stop.go:75] unable to get state: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:21.058442    6868 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:21.071611    6868 cli_runner.go:164] Run: docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}
	W0921 22:15:21.256345    6868 cli_runner.go:211] docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:21.256624    6868 delete.go:82] Unable to get host status for kubenet-20220921220528-5916, assuming it has already been deleted: state: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:21.264507    6868 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubenet-20220921220528-5916
	W0921 22:15:21.443610    6868 cli_runner.go:211] docker container inspect -f {{.Id}} kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:21.443610    6868 kic.go:356] could not find the container kubenet-20220921220528-5916 to remove it. will try anyways
	I0921 22:15:21.454252    6868 cli_runner.go:164] Run: docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}
	W0921 22:15:21.676591    6868 cli_runner.go:211] docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:15:21.676591    6868 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:21.682702    6868 cli_runner.go:164] Run: docker exec --privileged -t kubenet-20220921220528-5916 /bin/bash -c "sudo init 0"
	W0921 22:15:21.877131    6868 cli_runner.go:211] docker exec --privileged -t kubenet-20220921220528-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:15:21.877131    6868 oci.go:646] error shutdown kubenet-20220921220528-5916: docker exec --privileged -t kubenet-20220921220528-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:22.895072    6868 cli_runner.go:164] Run: docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}
	W0921 22:15:23.092812    6868 cli_runner.go:211] docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:23.092812    6868 oci.go:658] temporary error verifying shutdown: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:23.092812    6868 oci.go:660] temporary error: container kubenet-20220921220528-5916 status is  but expect it to be exited
	I0921 22:15:23.092812    6868 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:23.434767    6868 cli_runner.go:164] Run: docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}
	W0921 22:15:23.640861    6868 cli_runner.go:211] docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:23.640950    6868 oci.go:658] temporary error verifying shutdown: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:23.640950    6868 oci.go:660] temporary error: container kubenet-20220921220528-5916 status is  but expect it to be exited
	I0921 22:15:23.640950    6868 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:24.102170    6868 cli_runner.go:164] Run: docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}
	W0921 22:15:24.280934    6868 cli_runner.go:211] docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:24.281131    6868 oci.go:658] temporary error verifying shutdown: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:24.281157    6868 oci.go:660] temporary error: container kubenet-20220921220528-5916 status is  but expect it to be exited
	I0921 22:15:24.281238    6868 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:25.205356    6868 cli_runner.go:164] Run: docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}
	W0921 22:15:25.416034    6868 cli_runner.go:211] docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:25.416448    6868 oci.go:658] temporary error verifying shutdown: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:25.416448    6868 oci.go:660] temporary error: container kubenet-20220921220528-5916 status is  but expect it to be exited
	I0921 22:15:25.416520    6868 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:27.139633    6868 cli_runner.go:164] Run: docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}
	W0921 22:15:27.349004    6868 cli_runner.go:211] docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:27.349004    6868 oci.go:658] temporary error verifying shutdown: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:27.349004    6868 oci.go:660] temporary error: container kubenet-20220921220528-5916 status is  but expect it to be exited
	I0921 22:15:27.349004    6868 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:30.686221    6868 cli_runner.go:164] Run: docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}
	W0921 22:15:30.895814    6868 cli_runner.go:211] docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:30.895814    6868 oci.go:658] temporary error verifying shutdown: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:30.895814    6868 oci.go:660] temporary error: container kubenet-20220921220528-5916 status is  but expect it to be exited
	I0921 22:15:30.895814    6868 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:33.628286    6868 cli_runner.go:164] Run: docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}
	W0921 22:15:33.806997    6868 cli_runner.go:211] docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:33.807265    6868 oci.go:658] temporary error verifying shutdown: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:33.807265    6868 oci.go:660] temporary error: container kubenet-20220921220528-5916 status is  but expect it to be exited
	I0921 22:15:33.807265    6868 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:38.842744    6868 cli_runner.go:164] Run: docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}
	W0921 22:15:39.034584    6868 cli_runner.go:211] docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:39.034584    6868 oci.go:658] temporary error verifying shutdown: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:39.034584    6868 oci.go:660] temporary error: container kubenet-20220921220528-5916 status is  but expect it to be exited
	I0921 22:15:39.034584    6868 oci.go:88] couldn't shut down kubenet-20220921220528-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubenet-20220921220528-5916": docker container inspect kubenet-20220921220528-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	 
	I0921 22:15:39.042463    6868 cli_runner.go:164] Run: docker rm -f -v kubenet-20220921220528-5916
	I0921 22:15:39.255370    6868 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubenet-20220921220528-5916
	W0921 22:15:39.479554    6868 cli_runner.go:211] docker container inspect -f {{.Id}} kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:39.487583    6868 cli_runner.go:164] Run: docker network inspect kubenet-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:15:39.680871    6868 cli_runner.go:211] docker network inspect kubenet-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:15:39.690050    6868 network_create.go:272] running [docker network inspect kubenet-20220921220528-5916] to gather additional debugging logs...
	I0921 22:15:39.690050    6868 cli_runner.go:164] Run: docker network inspect kubenet-20220921220528-5916
	W0921 22:15:39.900776    6868 cli_runner.go:211] docker network inspect kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:39.900967    6868 network_create.go:275] error running [docker network inspect kubenet-20220921220528-5916]: docker network inspect kubenet-20220921220528-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220921220528-5916
	I0921 22:15:39.901051    6868 network_create.go:277] output of [docker network inspect kubenet-20220921220528-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220921220528-5916
	
	** /stderr **
	W0921 22:15:39.901893    6868 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:15:39.901893    6868 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:15:40.917034    6868 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:15:40.923040    6868 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:15:40.923040    6868 start.go:159] libmachine.API.Create for "kubenet-20220921220528-5916" (driver="docker")
	I0921 22:15:40.923040    6868 client.go:168] LocalClient.Create starting
	I0921 22:15:40.923040    6868 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:15:40.924035    6868 main.go:134] libmachine: Decoding PEM data...
	I0921 22:15:40.924035    6868 main.go:134] libmachine: Parsing certificate...
	I0921 22:15:40.924035    6868 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:15:40.924035    6868 main.go:134] libmachine: Decoding PEM data...
	I0921 22:15:40.924035    6868 main.go:134] libmachine: Parsing certificate...
	I0921 22:15:40.932031    6868 cli_runner.go:164] Run: docker network inspect kubenet-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:15:41.149580    6868 cli_runner.go:211] docker network inspect kubenet-20220921220528-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:15:41.156887    6868 network_create.go:272] running [docker network inspect kubenet-20220921220528-5916] to gather additional debugging logs...
	I0921 22:15:41.156887    6868 cli_runner.go:164] Run: docker network inspect kubenet-20220921220528-5916
	W0921 22:15:41.351608    6868 cli_runner.go:211] docker network inspect kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:41.351608    6868 network_create.go:275] error running [docker network inspect kubenet-20220921220528-5916]: docker network inspect kubenet-20220921220528-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220921220528-5916
	I0921 22:15:41.351608    6868 network_create.go:277] output of [docker network inspect kubenet-20220921220528-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220921220528-5916
	
	** /stderr **
	I0921 22:15:41.358615    6868 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:15:41.574209    6868 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000adc048] amended:false}} dirty:map[] misses:0}
	I0921 22:15:41.574209    6868 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:15:41.590572    6868 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000adc048] amended:true}} dirty:map[192.168.49.0:0xc000adc048 192.168.58.0:0xc0006e8f78] misses:0}
	I0921 22:15:41.591580    6868 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:15:41.591580    6868 network_create.go:115] attempt to create docker network kubenet-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:15:41.597618    6868 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 kubenet-20220921220528-5916
	W0921 22:15:41.791721    6868 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 kubenet-20220921220528-5916 returned with exit code 1
	E0921 22:15:41.791721    6868 network_create.go:104] error while trying to create docker network kubenet-20220921220528-5916 192.168.58.0/24: create docker network kubenet-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 kubenet-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 61228a5d0793886bfb0741785f65393973f871af2822a5698d9c4d5bac63cc6b (br-61228a5d0793): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:15:41.791721    6868 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 kubenet-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 61228a5d0793886bfb0741785f65393973f871af2822a5698d9c4d5bac63cc6b (br-61228a5d0793): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubenet-20220921220528-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 kubenet-20220921220528-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network 61228a5d0793886bfb0741785f65393973f871af2822a5698d9c4d5bac63cc6b (br-61228a5d0793): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:15:41.805721    6868 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:15:42.000696    6868 cli_runner.go:164] Run: docker volume create kubenet-20220921220528-5916 --label name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:15:42.183645    6868 cli_runner.go:211] docker volume create kubenet-20220921220528-5916 --label name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:15:42.183983    6868 client.go:171] LocalClient.Create took 1.260933s
	I0921 22:15:44.202680    6868 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:15:44.209122    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:44.393786    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:44.393842    6868 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:44.650213    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:44.859935    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:44.860306    6868 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:45.164588    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:45.358598    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:45.359101    6868 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:45.823705    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:46.016110    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	W0921 22:15:46.016110    6868 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	
	W0921 22:15:46.016110    6868 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:46.027095    6868 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:15:46.033387    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:46.232316    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:46.232494    6868 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:46.426186    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:46.635584    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:46.635584    6868 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:46.914509    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:47.112824    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:47.112824    6868 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:47.607170    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:47.798760    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	W0921 22:15:47.798760    6868 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	
	W0921 22:15:47.798760    6868 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:47.798760    6868 start.go:128] duration metric: createHost completed in 6.8816694s
	I0921 22:15:47.808755    6868 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:15:47.815720    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:48.010205    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:48.010205    6868 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:48.363896    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:48.589373    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:48.589631    6868 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:48.901648    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:49.096759    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:49.096759    6868 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:49.560823    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:49.758324    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	W0921 22:15:49.758324    6868 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	
	W0921 22:15:49.758324    6868 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:49.770239    6868 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:15:49.777540    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:50.008058    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:50.008058    6868 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:50.196285    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:50.388595    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	I0921 22:15:50.388595    6868 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:50.928094    6868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916
	W0921 22:15:51.106618    6868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916 returned with exit code 1
	W0921 22:15:51.106618    6868 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	
	W0921 22:15:51.106915    6868 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20220921220528-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220921220528-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20220921220528-5916
	I0921 22:15:51.106915    6868 fix.go:57] fixHost completed within 30.4542613s
	I0921 22:15:51.106915    6868 start.go:83] releasing machines lock for "kubenet-20220921220528-5916", held for 30.4542613s
	W0921 22:15:51.106915    6868 out.go:239] * Failed to start docker container. Running "minikube delete -p kubenet-20220921220528-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220921220528-5916 container: docker volume create kubenet-20220921220528-5916 --label name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220921220528-5916': mkdir /var/lib/docker/volumes/kubenet-20220921220528-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p kubenet-20220921220528-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220921220528-5916 container: docker volume create kubenet-20220921220528-5916 --label name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220921220528-5916': mkdir /var/lib/docker/volumes/kubenet-20220921220528-5916: read-only file system
	
	I0921 22:15:51.113903    6868 out.go:177] 
	W0921 22:15:51.115944    6868 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220921220528-5916 container: docker volume create kubenet-20220921220528-5916 --label name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220921220528-5916': mkdir /var/lib/docker/volumes/kubenet-20220921220528-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubenet-20220921220528-5916 container: docker volume create kubenet-20220921220528-5916 --label name.minikube.sigs.k8s.io=kubenet-20220921220528-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kubenet-20220921220528-5916: error while creating volume root path '/var/lib/docker/volumes/kubenet-20220921220528-5916': mkdir /var/lib/docker/volumes/kubenet-20220921220528-5916: read-only file system
	
	W0921 22:15:51.115944    6868 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:15:51.115944    6868 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:15:51.118958    6868 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/kubenet/Start (49.20s)
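The kubenet failure above shows two distinct symptoms: "docker network create --subnet=192.168.58.0/24 ..." is rejected because an existing bridge (br-8a3cd8d165a4) already covers an overlapping IPv4 range, and the run finally aborts with PR_DOCKER_READONLY_VOL once "docker volume create" hits a read-only /var/lib/docker/volumes. To attribute the overlap warning to a specific leftover network, a minimal standalone Go sketch can shell out to the Docker CLI and print every network's subnets. This is not minikube or test-suite code; the program and its struct are illustrative only and model just the fields of "docker network inspect" output they need.

// subnets.go - illustrative diagnostic, not part of minikube or net_test.go:
// list every Docker network and its IPv4 subnets so an overlapping range such
// as the 192.168.58.0/24 conflict in the log above can be traced to a network.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// network models only the fields of `docker network inspect` we read here.
type network struct {
	Name string `json:"Name"`
	IPAM struct {
		Config []struct {
			Subnet string `json:"Subnet"`
		} `json:"Config"`
	} `json:"IPAM"`
}

func main() {
	// Network names, one per line.
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		log.Fatalf("docker network ls: %v", err)
	}
	names := strings.Fields(string(out))
	if len(names) == 0 {
		return
	}
	// Inspect all networks in one call; the result is a JSON array.
	args := append([]string{"network", "inspect"}, names...)
	raw, err := exec.Command("docker", args...).Output()
	if err != nil {
		log.Fatalf("docker network inspect: %v", err)
	}
	var nets []network
	if err := json.Unmarshal(raw, &nets); err != nil {
		log.Fatalf("decode inspect output: %v", err)
	}
	for _, n := range nets {
		for _, c := range n.IPAM.Config {
			fmt.Printf("%s\t%s\n", n.Name, c.Subnet)
		}
	}
}

Comparing that listing against the subnet minikube tries to reserve (192.168.49.0/24, then 192.168.58.0/24 above) lets the "networks have overlapping IPv4" error be pinned to a concrete existing bridge rather than guessed at.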

TestNetworkPlugins/group/kindnet/Start (48.94s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20220921220530-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kindnet-20220921220530-5916 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: exit status 60 (48.8291231s)
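The kindnet run below appears to stall in the same early phase as the kubenet run above: "docker volume create" returns exit status 1 before any container exists, and for kubenet the daemon reported a read-only file system under /var/lib/docker/volumes (PR_DOCKER_READONLY_VOL). A hypothetical probe, not taken from minikube (the scratch volume name is made up), that reproduces the same check by hand and cleans up after itself:

// rovolcheck.go - hypothetical probe, not minikube code: create and remove a
// scratch volume and flag the "read-only file system" daemon error seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const name = "ro-probe-volume" // made-up scratch volume name
	out, err := exec.Command("docker", "volume", "create", name).CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "read-only file system") {
			fmt.Println("volume root is read-only; restarting Docker Desktop is the usual remedy")
			return
		}
		fmt.Printf("volume create failed for another reason: %v\n%s", err, out)
		return
	}
	// Creation worked, so remove the probe volume again.
	_ = exec.Command("docker", "volume", "rm", name).Run()
	fmt.Println("volume creation works; /var/lib/docker/volumes is writable")
}

When the daemon is in that read-only state, the remedy minikube itself prints above, restarting Docker, is the usual fix; see the related issue https://github.com/kubernetes/minikube/issues/6825 referenced in the log.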

-- stdout --
	* [kindnet-20220921220530-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kindnet-20220921220530-5916 in cluster kindnet-20220921220530-5916
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "kindnet-20220921220530-5916" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0921 22:15:04.534590    8960 out.go:296] Setting OutFile to fd 1620 ...
	I0921 22:15:04.595212    8960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:15:04.595212    8960 out.go:309] Setting ErrFile to fd 1740...
	I0921 22:15:04.595212    8960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:15:04.614217    8960 out.go:303] Setting JSON to false
	I0921 22:15:04.617217    8960 start.go:115] hostinfo: {"hostname":"minikube2","uptime":4573,"bootTime":1663793931,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 22:15:04.617217    8960 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 22:15:04.621221    8960 out.go:177] * [kindnet-20220921220530-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 22:15:04.625279    8960 notify.go:214] Checking for updates...
	I0921 22:15:04.627212    8960 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 22:15:04.629223    8960 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 22:15:04.632216    8960 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:15:04.634214    8960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:15:04.638213    8960 config.go:180] Loaded profile config "enable-default-cni-20220921220528-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:15:04.639214    8960 config.go:180] Loaded profile config "kubenet-20220921220528-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:15:04.639214    8960 config.go:180] Loaded profile config "multinode-20220921215635-5916-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 22:15:04.639214    8960 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:15:04.942195    8960 docker.go:137] docker version: linux-20.10.17
	I0921 22:15:04.950167    8960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:15:05.514704    8960 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:96 SystemTime:2022-09-21 22:15:05.0984487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:15:05.517703    8960 out.go:177] * Using the docker driver based on user configuration
	I0921 22:15:05.520702    8960 start.go:284] selected driver: docker
	I0921 22:15:05.520702    8960 start.go:808] validating driver "docker" against <nil>
	I0921 22:15:05.521711    8960 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:15:05.620652    8960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:15:06.186972    8960 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:96 SystemTime:2022-09-21 22:15:05.7808462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 22:15:06.187504    8960 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:15:06.188276    8960 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:15:06.192141    8960 out.go:177] * Using Docker Desktop driver with root privileges
	I0921 22:15:06.194449    8960 cni.go:95] Creating CNI manager for "kindnet"
	I0921 22:15:06.194494    8960 start_flags.go:311] Found "CNI" CNI - setting NetworkPlugin=cni
	I0921 22:15:06.194535    8960 start_flags.go:316] config:
	{Name:kindnet-20220921220530-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:kindnet-20220921220530-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:15:06.198305    8960 out.go:177] * Starting control plane node kindnet-20220921220530-5916 in cluster kindnet-20220921220530-5916
	I0921 22:15:06.200552    8960 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 22:15:06.203516    8960 out.go:177] * Pulling base image ...
	I0921 22:15:06.206330    8960 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 22:15:06.206382    8960 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:15:06.206536    8960 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 22:15:06.206536    8960 cache.go:57] Caching tarball of preloaded images
	I0921 22:15:06.207114    8960 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:15:06.207114    8960 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on docker
	I0921 22:15:06.207114    8960 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-20220921220530-5916\config.json ...
	I0921 22:15:06.207854    8960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-20220921220530-5916\config.json: {Name:mk94eca89429db185db12bdcac6f867ce771e616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:15:06.419344    8960 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 22:15:06.419344    8960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:15:06.419344    8960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:15:06.419344    8960 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 22:15:06.419344    8960 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 22:15:06.419344    8960 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 22:15:06.419344    8960 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 22:15:06.419344    8960 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from local cache
	I0921 22:15:06.419344    8960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 22:15:08.851559    8960 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c from cached tarball
	I0921 22:15:08.851559    8960 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	I0921 22:15:08.851559    8960 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.d.lock
	I0921 22:15:08.852332    8960 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:15:09.068126    8960 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local daemon
	    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [__________________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 800msI0921 22:15:10.545266    8960 cache.go:177] use image loaded from cache gcr.io/k8s-minikube/kicbase:v0.0.34
	W0921 22:15:10.545266    8960 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.34, but successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.34 as a fallback image
	I0921 22:15:10.545266    8960 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:15:10.545266    8960 start.go:364] acquiring machines lock for kindnet-20220921220530-5916: {Name:mk15ee06a6b8c50f68d3f04d046af028b39f7798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:15:10.545896    8960 start.go:368] acquired machines lock for "kindnet-20220921220530-5916" in 531.9µs
	I0921 22:15:10.546162    8960 start.go:93] Provisioning new machine with config: &{Name:kindnet-20220921220530-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:kindnet-20220921220530-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0921 22:15:10.546376    8960 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:15:10.550879    8960 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:15:10.551383    8960 start.go:159] libmachine.API.Create for "kindnet-20220921220530-5916" (driver="docker")
	I0921 22:15:10.551465    8960 client.go:168] LocalClient.Create starting
	I0921 22:15:10.552029    8960 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:15:10.552029    8960 main.go:134] libmachine: Decoding PEM data...
	I0921 22:15:10.552029    8960 main.go:134] libmachine: Parsing certificate...
	I0921 22:15:10.552634    8960 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:15:10.552811    8960 main.go:134] libmachine: Decoding PEM data...
	I0921 22:15:10.552811    8960 main.go:134] libmachine: Parsing certificate...
	I0921 22:15:10.561453    8960 cli_runner.go:164] Run: docker network inspect kindnet-20220921220530-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:15:10.745283    8960 cli_runner.go:211] docker network inspect kindnet-20220921220530-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:15:10.754750    8960 network_create.go:272] running [docker network inspect kindnet-20220921220530-5916] to gather additional debugging logs...
	I0921 22:15:10.754750    8960 cli_runner.go:164] Run: docker network inspect kindnet-20220921220530-5916
	W0921 22:15:10.947698    8960 cli_runner.go:211] docker network inspect kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:10.947698    8960 network_create.go:275] error running [docker network inspect kindnet-20220921220530-5916]: docker network inspect kindnet-20220921220530-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220921220530-5916
	I0921 22:15:10.947698    8960 network_create.go:277] output of [docker network inspect kindnet-20220921220530-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220921220530-5916
	
	** /stderr **
	I0921 22:15:10.957424    8960 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:15:11.166647    8960 network.go:290] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000ad38] misses:0}
	I0921 22:15:11.167702    8960 network.go:236] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:15:11.167702    8960 network_create.go:115] attempt to create docker network kindnet-20220921220530-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0921 22:15:11.173990    8960 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 kindnet-20220921220530-5916
	W0921 22:15:11.366431    8960 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 kindnet-20220921220530-5916 returned with exit code 1
	E0921 22:15:11.366431    8960 network_create.go:104] error while trying to create docker network kindnet-20220921220530-5916 192.168.49.0/24: create docker network kindnet-20220921220530-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 kindnet-20220921220530-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ac3153a7e6d21b7f1ac507685a0aec06be5e466f4cbae5a2cd8bd287deff2aee (br-ac3153a7e6d2): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	W0921 22:15:11.366431    8960 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220921220530-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 kindnet-20220921220530-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ac3153a7e6d21b7f1ac507685a0aec06be5e466f4cbae5a2cd8bd287deff2aee (br-ac3153a7e6d2): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220921220530-5916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 kindnet-20220921220530-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network ac3153a7e6d21b7f1ac507685a0aec06be5e466f4cbae5a2cd8bd287deff2aee (br-ac3153a7e6d2): conflicts with network a04d36bfb3cf4b1b1ab643a2ec70fadfc4a485878b32a275b4c7653037d66dd0 (br-a04d36bfb3cf): networks have overlapping IPv4
	
	I0921 22:15:11.381070    8960 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:15:11.594046    8960 cli_runner.go:164] Run: docker volume create kindnet-20220921220530-5916 --label name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:15:11.786839    8960 cli_runner.go:211] docker volume create kindnet-20220921220530-5916 --label name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:15:11.787057    8960 client.go:171] LocalClient.Create took 1.2354407s
	I0921 22:15:13.801806    8960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:15:13.807402    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:14.006274    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:14.006606    8960 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:14.294486    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:14.487762    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:14.487762    8960 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:15.050692    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:15.229537    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	W0921 22:15:15.229764    8960 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	
	W0921 22:15:15.229764    8960 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:15.243244    8960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:15:15.251239    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:15.467149    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:15.467149    8960 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:15.722755    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:15.931775    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:15.932094    8960 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:16.296285    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:16.489905    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:16.489905    8960 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:17.181671    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:17.362688    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	W0921 22:15:17.363041    8960 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	
	W0921 22:15:17.363041    8960 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:17.363041    8960 start.go:128] duration metric: createHost completed in 6.8166105s
	I0921 22:15:17.363041    8960 start.go:83] releasing machines lock for "kindnet-20220921220530-5916", held for 6.8170901s
	W0921 22:15:17.363041    8960 start.go:602] error starting host: creating host: create: creating: setting up container node: creating volume for kindnet-20220921220530-5916 container: docker volume create kindnet-20220921220530-5916 --label name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220921220530-5916: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220921220530-5916': mkdir /var/lib/docker/volumes/kindnet-20220921220530-5916: read-only file system
	I0921 22:15:17.372016    8960 cli_runner.go:164] Run: docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}
	W0921 22:15:17.566069    8960 cli_runner.go:211] docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:17.566245    8960 delete.go:82] Unable to get host status for kindnet-20220921220530-5916, assuming it has already been deleted: state: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	W0921 22:15:17.566432    8960 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kindnet-20220921220530-5916 container: docker volume create kindnet-20220921220530-5916 --label name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220921220530-5916: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220921220530-5916': mkdir /var/lib/docker/volumes/kindnet-20220921220530-5916: read-only file system
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kindnet-20220921220530-5916 container: docker volume create kindnet-20220921220530-5916 --label name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220921220530-5916: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220921220530-5916': mkdir /var/lib/docker/volumes/kindnet-20220921220530-5916: read-only file system
	
	I0921 22:15:17.566432    8960 start.go:617] Will try again in 5 seconds ...
	I0921 22:15:22.576013    8960 start.go:364] acquiring machines lock for kindnet-20220921220530-5916: {Name:mk15ee06a6b8c50f68d3f04d046af028b39f7798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:15:22.576316    8960 start.go:368] acquired machines lock for "kindnet-20220921220530-5916" in 303.2µs
	I0921 22:15:22.576541    8960 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:15:22.576541    8960 fix.go:55] fixHost starting: 
	I0921 22:15:22.597130    8960 cli_runner.go:164] Run: docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}
	W0921 22:15:22.795088    8960 cli_runner.go:211] docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:22.795088    8960 fix.go:103] recreateIfNeeded on kindnet-20220921220530-5916: state= err=unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:22.795088    8960 fix.go:108] machineExists: false. err=machine does not exist
	I0921 22:15:22.801546    8960 out.go:177] * docker "kindnet-20220921220530-5916" container is missing, will recreate.
	I0921 22:15:22.805066    8960 delete.go:124] DEMOLISHING kindnet-20220921220530-5916 ...
	I0921 22:15:22.818169    8960 cli_runner.go:164] Run: docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}
	W0921 22:15:23.028816    8960 cli_runner.go:211] docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:15:23.028816    8960 stop.go:75] unable to get state: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:23.028816    8960 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:23.042799    8960 cli_runner.go:164] Run: docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}
	W0921 22:15:23.247769    8960 cli_runner.go:211] docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:23.247769    8960 delete.go:82] Unable to get host status for kindnet-20220921220530-5916, assuming it has already been deleted: state: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:23.254728    8960 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kindnet-20220921220530-5916
	W0921 22:15:23.453522    8960 cli_runner.go:211] docker container inspect -f {{.Id}} kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:23.453522    8960 kic.go:356] could not find the container kindnet-20220921220530-5916 to remove it. will try anyways
	I0921 22:15:23.463312    8960 cli_runner.go:164] Run: docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}
	W0921 22:15:23.640861    8960 cli_runner.go:211] docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	W0921 22:15:23.640950    8960 oci.go:84] error getting container status, will try to delete anyways: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:23.647960    8960 cli_runner.go:164] Run: docker exec --privileged -t kindnet-20220921220530-5916 /bin/bash -c "sudo init 0"
	W0921 22:15:23.861026    8960 cli_runner.go:211] docker exec --privileged -t kindnet-20220921220530-5916 /bin/bash -c "sudo init 0" returned with exit code 1
	I0921 22:15:23.861026    8960 oci.go:646] error shutdown kindnet-20220921220530-5916: docker exec --privileged -t kindnet-20220921220530-5916 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:24.886140    8960 cli_runner.go:164] Run: docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}
	W0921 22:15:25.078825    8960 cli_runner.go:211] docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:25.078907    8960 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:25.078907    8960 oci.go:660] temporary error: container kindnet-20220921220530-5916 status is  but expect it to be exited
	I0921 22:15:25.078907    8960 retry.go:31] will retry after 328.259627ms: couldn't verify container is exited. %v: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:25.425376    8960 cli_runner.go:164] Run: docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}
	W0921 22:15:25.618397    8960 cli_runner.go:211] docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:25.618397    8960 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:25.618397    8960 oci.go:660] temporary error: container kindnet-20220921220530-5916 status is  but expect it to be exited
	I0921 22:15:25.618397    8960 retry.go:31] will retry after 447.727139ms: couldn't verify container is exited. %v: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:26.086551    8960 cli_runner.go:164] Run: docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}
	W0921 22:15:26.278635    8960 cli_runner.go:211] docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:26.278743    8960 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:26.278743    8960 oci.go:660] temporary error: container kindnet-20220921220530-5916 status is  but expect it to be exited
	I0921 22:15:26.278743    8960 retry.go:31] will retry after 901.025843ms: couldn't verify container is exited. %v: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:27.203076    8960 cli_runner.go:164] Run: docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}
	W0921 22:15:27.395985    8960 cli_runner.go:211] docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:27.395985    8960 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:27.395985    8960 oci.go:660] temporary error: container kindnet-20220921220530-5916 status is  but expect it to be exited
	I0921 22:15:27.395985    8960 retry.go:31] will retry after 1.713171311s: couldn't verify container is exited. %v: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:29.117874    8960 cli_runner.go:164] Run: docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}
	W0921 22:15:29.294910    8960 cli_runner.go:211] docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:29.294910    8960 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:29.294910    8960 oci.go:660] temporary error: container kindnet-20220921220530-5916 status is  but expect it to be exited
	I0921 22:15:29.294910    8960 retry.go:31] will retry after 3.325151152s: couldn't verify container is exited. %v: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:32.630952    8960 cli_runner.go:164] Run: docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}
	W0921 22:15:32.834415    8960 cli_runner.go:211] docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:32.834492    8960 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:32.834492    8960 oci.go:660] temporary error: container kindnet-20220921220530-5916 status is  but expect it to be exited
	I0921 22:15:32.834551    8960 retry.go:31] will retry after 2.711970641s: couldn't verify container is exited. %v: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:35.569448    8960 cli_runner.go:164] Run: docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}
	W0921 22:15:35.780320    8960 cli_runner.go:211] docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:35.780424    8960 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:35.780424    8960 oci.go:660] temporary error: container kindnet-20220921220530-5916 status is  but expect it to be exited
	I0921 22:15:35.780580    8960 retry.go:31] will retry after 5.015617898s: couldn't verify container is exited. %v: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:40.814019    8960 cli_runner.go:164] Run: docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}
	W0921 22:15:41.011024    8960 cli_runner.go:211] docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}} returned with exit code 1
	I0921 22:15:41.011024    8960 oci.go:658] temporary error verifying shutdown: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:41.011024    8960 oci.go:660] temporary error: container kindnet-20220921220530-5916 status is  but expect it to be exited
	I0921 22:15:41.011024    8960 oci.go:88] couldn't shut down kindnet-20220921220530-5916 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kindnet-20220921220530-5916": docker container inspect kindnet-20220921220530-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	 
	I0921 22:15:41.018035    8960 cli_runner.go:164] Run: docker rm -f -v kindnet-20220921220530-5916
	I0921 22:15:41.218899    8960 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kindnet-20220921220530-5916
	W0921 22:15:41.413608    8960 cli_runner.go:211] docker container inspect -f {{.Id}} kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:41.419608    8960 cli_runner.go:164] Run: docker network inspect kindnet-20220921220530-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:15:41.617650    8960 cli_runner.go:211] docker network inspect kindnet-20220921220530-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:15:41.623648    8960 network_create.go:272] running [docker network inspect kindnet-20220921220530-5916] to gather additional debugging logs...
	I0921 22:15:41.623648    8960 cli_runner.go:164] Run: docker network inspect kindnet-20220921220530-5916
	W0921 22:15:41.807079    8960 cli_runner.go:211] docker network inspect kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:41.807120    8960 network_create.go:275] error running [docker network inspect kindnet-20220921220530-5916]: docker network inspect kindnet-20220921220530-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220921220530-5916
	I0921 22:15:41.807166    8960 network_create.go:277] output of [docker network inspect kindnet-20220921220530-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220921220530-5916
	
	** /stderr **
	W0921 22:15:41.808218    8960 delete.go:139] delete failed (probably ok) <nil>
	I0921 22:15:41.808218    8960 fix.go:115] Sleeping 1 second for extra luck!
	I0921 22:15:42.819912    8960 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:15:42.824749    8960 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:15:42.825186    8960 start.go:159] libmachine.API.Create for "kindnet-20220921220530-5916" (driver="docker")
	I0921 22:15:42.825246    8960 client.go:168] LocalClient.Create starting
	I0921 22:15:42.825246    8960 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0921 22:15:42.825967    8960 main.go:134] libmachine: Decoding PEM data...
	I0921 22:15:42.825967    8960 main.go:134] libmachine: Parsing certificate...
	I0921 22:15:42.825967    8960 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0921 22:15:42.825967    8960 main.go:134] libmachine: Decoding PEM data...
	I0921 22:15:42.825967    8960 main.go:134] libmachine: Parsing certificate...
	I0921 22:15:42.836181    8960 cli_runner.go:164] Run: docker network inspect kindnet-20220921220530-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:15:43.035446    8960 cli_runner.go:211] docker network inspect kindnet-20220921220530-5916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:15:43.043000    8960 network_create.go:272] running [docker network inspect kindnet-20220921220530-5916] to gather additional debugging logs...
	I0921 22:15:43.043000    8960 cli_runner.go:164] Run: docker network inspect kindnet-20220921220530-5916
	W0921 22:15:43.251188    8960 cli_runner.go:211] docker network inspect kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:43.251188    8960 network_create.go:275] error running [docker network inspect kindnet-20220921220530-5916]: docker network inspect kindnet-20220921220530-5916: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220921220530-5916
	I0921 22:15:43.251267    8960 network_create.go:277] output of [docker network inspect kindnet-20220921220530-5916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220921220530-5916
	
	** /stderr **
	I0921 22:15:43.260054    8960 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:15:43.475290    8960 network.go:281] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000ad38] amended:false}} dirty:map[] misses:0}
	I0921 22:15:43.475290    8960 network.go:239] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:15:43.489250    8960 network.go:290] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000ad38] amended:true}} dirty:map[192.168.49.0:0xc00000ad38 192.168.58.0:0xc000514b80] misses:0}
	I0921 22:15:43.489250    8960 network.go:236] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:15:43.489250    8960 network_create.go:115] attempt to create docker network kindnet-20220921220530-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0921 22:15:43.498913    8960 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 kindnet-20220921220530-5916
	W0921 22:15:43.679737    8960 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 kindnet-20220921220530-5916 returned with exit code 1
	E0921 22:15:43.679959    8960 network_create.go:104] error while trying to create docker network kindnet-20220921220530-5916 192.168.58.0/24: create docker network kindnet-20220921220530-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 kindnet-20220921220530-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cdfaa633a77aa2a733cb423e76e7b7489a2026c396d0cf35d7080a619502f90f (br-cdfaa633a77a): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	W0921 22:15:43.680051    8960 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220921220530-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 kindnet-20220921220530-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cdfaa633a77aa2a733cb423e76e7b7489a2026c396d0cf35d7080a619502f90f (br-cdfaa633a77a): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kindnet-20220921220530-5916 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 kindnet-20220921220530-5916: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: cannot create network cdfaa633a77aa2a733cb423e76e7b7489a2026c396d0cf35d7080a619502f90f (br-cdfaa633a77a): conflicts with network 8a3cd8d165a4af032276277dbee93b52b8ee4aed7ca6d8aceee6814901b31792 (br-8a3cd8d165a4): networks have overlapping IPv4
	
	I0921 22:15:43.697461    8960 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:15:43.904596    8960 cli_runner.go:164] Run: docker volume create kindnet-20220921220530-5916 --label name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true
	W0921 22:15:44.082494    8960 cli_runner.go:211] docker volume create kindnet-20220921220530-5916 --label name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I0921 22:15:44.082651    8960 client.go:171] LocalClient.Create took 1.2573949s
	I0921 22:15:46.104805    8960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:15:46.112415    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:46.308722    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:46.309202    8960 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:46.566746    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:46.792646    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:46.792982    8960 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:47.105721    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:47.299245    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:47.299245    8960 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:47.762015    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:47.954832    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	W0921 22:15:47.954832    8960 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	
	W0921 22:15:47.954832    8960 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:47.964839    8960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:15:47.972836    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:48.169305    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:48.169520    8960 retry.go:31] will retry after 179.638263ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:48.368962    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:48.573934    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:48.574182    8960 retry.go:31] will retry after 263.695078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:48.852788    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:49.048765    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:49.048765    8960 retry.go:31] will retry after 484.240172ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:49.544093    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:49.742339    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	W0921 22:15:49.742339    8960 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	
	W0921 22:15:49.742339    8960 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:49.742339    8960 start.go:128] duration metric: createHost completed in 6.92237s
	I0921 22:15:49.753332    8960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:15:49.760326    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:49.992093    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:49.992093    8960 retry.go:31] will retry after 340.62286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:50.353498    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:50.563725    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:50.563914    8960 retry.go:31] will retry after 297.417842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:50.875890    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:51.058911    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:51.058911    8960 retry.go:31] will retry after 448.358942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:51.523127    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:51.719878    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	W0921 22:15:51.719878    8960 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	
	W0921 22:15:51.719878    8960 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:51.733098    8960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:15:51.740762    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:51.936583    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:51.936583    8960 retry.go:31] will retry after 176.645665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:52.138228    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:52.330603    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	I0921 22:15:52.330603    8960 retry.go:31] will retry after 512.00063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:52.864719    8960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916
	W0921 22:15:53.066849    8960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916 returned with exit code 1
	W0921 22:15:53.066849    8960 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	
	W0921 22:15:53.066849    8960 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20220921220530-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220921220530-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20220921220530-5916
	I0921 22:15:53.066849    8960 fix.go:57] fixHost completed within 30.4900584s
	I0921 22:15:53.066849    8960 start.go:83] releasing machines lock for "kindnet-20220921220530-5916", held for 30.4902832s
	W0921 22:15:53.066849    8960 out.go:239] * Failed to start docker container. Running "minikube delete -p kindnet-20220921220530-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220921220530-5916 container: docker volume create kindnet-20220921220530-5916 --label name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220921220530-5916: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220921220530-5916': mkdir /var/lib/docker/volumes/kindnet-20220921220530-5916: read-only file system
	
	* Failed to start docker container. Running "minikube delete -p kindnet-20220921220530-5916" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220921220530-5916 container: docker volume create kindnet-20220921220530-5916 --label name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220921220530-5916: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220921220530-5916': mkdir /var/lib/docker/volumes/kindnet-20220921220530-5916: read-only file system
	
	I0921 22:15:53.071849    8960 out.go:177] 
	W0921 22:15:53.073845    8960 out.go:239] X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220921220530-5916 container: docker volume create kindnet-20220921220530-5916 --label name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220921220530-5916: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220921220530-5916': mkdir /var/lib/docker/volumes/kindnet-20220921220530-5916: read-only file system
	
	X Exiting due to PR_DOCKER_READONLY_VOL: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kindnet-20220921220530-5916 container: docker volume create kindnet-20220921220530-5916 --label name.minikube.sigs.k8s.io=kindnet-20220921220530-5916 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: create kindnet-20220921220530-5916: error while creating volume root path '/var/lib/docker/volumes/kindnet-20220921220530-5916': mkdir /var/lib/docker/volumes/kindnet-20220921220530-5916: read-only file system
	
	W0921 22:15:53.074848    8960 out.go:239] * Suggestion: Restart Docker
	* Suggestion: Restart Docker
	W0921 22:15:53.074848    8960 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/6825
	* Related issue: https://github.com/kubernetes/minikube/issues/6825
	I0921 22:15:53.077843    8960 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 60
--- FAIL: TestNetworkPlugins/group/kindnet/Start (48.94s)
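The failure above has two distinct causes, both visible in the log: every candidate subnet (192.168.49.0/24, then 192.168.58.0/24) overlaps an existing Docker bridge network, and the fallback "docker volume create" is refused because /var/lib/docker/volumes on the daemon is a read-only file system. A minimal diagnostic sketch for checking both conditions against the same daemon is below; it reuses the IPAM template fields minikube itself queries in the log, and the volume name probe-volume is illustrative rather than taken from the run.

  # List every bridge network with the subnet it owns, to find the overlapping
  # IPv4 ranges reported by "docker network create" above.
  docker network ls --filter driver=bridge --format '{{.Name}}' | while read -r net; do
    docker network inspect "$net" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
  done

  # Confirm the read-only volume root: any volume creation fails the same way
  # when /var/lib/docker/volumes cannot be written.
  docker volume create probe-volume && docker volume rm probe-volume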


Test pass (57/224)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 11.06
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.51
10 TestDownloadOnly/v1.25.2/json-events 9.46
11 TestDownloadOnly/v1.25.2/preload-exists 0
14 TestDownloadOnly/v1.25.2/kubectl 0
15 TestDownloadOnly/v1.25.2/LogsDuration 0.7
16 TestDownloadOnly/DeleteAll 2.53
17 TestDownloadOnly/DeleteAlwaysSucceeds 1.53
18 TestDownloadOnlyKic 35.3
19 TestBinaryMirror 4.36
33 TestErrorSpam/start 5.58
34 TestErrorSpam/status 1.7
35 TestErrorSpam/pause 3.26
36 TestErrorSpam/unpause 3.11
37 TestErrorSpam/stop 57.12
40 TestFunctional/serial/CopySyncFile 0.04
42 TestFunctional/serial/AuditLog 0
48 TestFunctional/serial/CacheCmd/cache/add_remote 4.26
50 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.34
51 TestFunctional/serial/CacheCmd/cache/list 0.33
54 TestFunctional/serial/CacheCmd/cache/delete 0.71
62 TestFunctional/parallel/ConfigCmd 2.42
64 TestFunctional/parallel/DryRun 3.45
65 TestFunctional/parallel/InternationalLanguage 1.46
71 TestFunctional/parallel/AddonsCmd 1.07
86 TestFunctional/parallel/ProfileCmd/profile_not_create 1.51
87 TestFunctional/parallel/ProfileCmd/profile_list 1.22
88 TestFunctional/parallel/ProfileCmd/profile_json_output 1.21
90 TestFunctional/parallel/Version/short 0.38
100 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
107 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
114 TestFunctional/parallel/ImageCommands/ImageRemove 1.17
117 TestFunctional/delete_addon-resizer_images 0.4
118 TestFunctional/delete_my-image_image 0.19
119 TestFunctional/delete_minikube_cached_images 0.22
125 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.57
130 TestJSONOutput/start/Audit 0
136 TestJSONOutput/pause/Audit 0
138 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
142 TestJSONOutput/unpause/Audit 0
144 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/stop/Audit 0
152 TestErrorJSONOutput 1.83
155 TestKicCustomNetwork/use_default_bridge_network 193.57
158 TestMainNoArgs 0.33
192 TestNoKubernetes/serial/StartNoK8sWithVersion 0.57
193 TestStoppedBinaryUpgrade/Setup 0.51
219 TestNoKubernetes/serial/VerifyK8sNotRunning 1.5
220 TestNoKubernetes/serial/ProfileList 3.59
263 TestStartStop/group/newest-cni/serial/DeployApp 0
264 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.59
276 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
277 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/json-events (11.06s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220921212952-5916 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220921212952-5916 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (11.063908s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (11.06s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.51s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220921212952-5916
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220921212952-5916: exit status 85 (513.1805ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|-----------------------------------|-------------------|---------|---------------------|----------|
	| Command |               Args                |              Profile              |       User        | Version |     Start Time      | End Time |
	|---------|-----------------------------------|-----------------------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only -p        | download-only-20220921212952-5916 | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:29 GMT |          |
	|         | download-only-20220921212952-5916 |                                   |                   |         |                     |          |
	|         | --force --alsologtostderr         |                                   |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                                   |                   |         |                     |          |
	|         | --container-runtime=docker        |                                   |                   |         |                     |          |
	|         | --driver=docker                   |                                   |                   |         |                     |          |
	|---------|-----------------------------------|-----------------------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 21:29:53
	Running on machine: minikube2
	Binary: Built with gc go1.19.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 21:29:52.974877    6888 out.go:296] Setting OutFile to fd 632 ...
	I0921 21:29:53.036020    6888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:29:53.036020    6888 out.go:309] Setting ErrFile to fd 636...
	I0921 21:29:53.036020    6888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:29:53.048305    6888 root.go:310] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0921 21:29:53.058818    6888 out.go:303] Setting JSON to true
	I0921 21:29:53.065856    6888 start.go:115] hostinfo: {"hostname":"minikube2","uptime":1861,"bootTime":1663793932,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 21:29:53.065856    6888 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 21:29:53.103761    6888 out.go:97] [download-only-20220921212952-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 21:29:53.103761    6888 notify.go:214] Checking for updates...
	W0921 21:29:53.104814    6888 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0921 21:29:53.107576    6888 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 21:29:53.109762    6888 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 21:29:53.113068    6888 out.go:169] MINIKUBE_LOCATION=14995
	I0921 21:29:53.116216    6888 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0921 21:29:53.121049    6888 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0921 21:29:53.121741    6888 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:29:53.441161    6888 docker.go:137] docker version: linux-20.10.17
	I0921 21:29:53.448141    6888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:29:53.967965    6888 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 21:29:53.6049709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:29:53.993137    6888 out.go:97] Using the docker driver based on user configuration
	I0921 21:29:53.993466    6888 start.go:284] selected driver: docker
	I0921 21:29:53.993520    6888 start.go:808] validating driver "docker" against <nil>
	I0921 21:29:54.007607    6888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:29:54.528863    6888 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 21:29:54.1658238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:29:54.528863    6888 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 21:29:54.650571    6888 start_flags.go:383] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I0921 21:29:54.651374    6888 start_flags.go:849] Wait components to verify : map[apiserver:true system_pods:true]
	I0921 21:29:54.655019    6888 out.go:169] Using Docker Desktop driver with root privileges
	I0921 21:29:54.656984    6888 cni.go:95] Creating CNI manager for ""
	I0921 21:29:54.656984    6888 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 21:29:54.656984    6888 start_flags.go:316] config:
	{Name:download-only-20220921212952-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220921212952-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:29:54.659707    6888 out.go:97] Starting control plane node download-only-20220921212952-5916 in cluster download-only-20220921212952-5916
	I0921 21:29:54.659707    6888 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 21:29:54.662126    6888 out.go:97] Pulling base image ...
	I0921 21:29:54.662347    6888 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0921 21:29:54.662347    6888 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:29:54.724318    6888 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0921 21:29:54.724318    6888 cache.go:57] Caching tarball of preloaded images
	I0921 21:29:54.725024    6888 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0921 21:29:54.727762    6888 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0921 21:29:54.727762    6888 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0921 21:29:54.847981    6888 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0921 21:29:54.854974    6888 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 21:29:54.854974    6888 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:29:54.854974    6888 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:29:54.854974    6888 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 21:29:54.855976    6888 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 21:29:58.271303    6888 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0921 21:29:58.287194    6888 preload.go:256] verifying checksum of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0921 21:29:59.353483    6888 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0921 21:29:59.354465    6888 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-20220921212952-5916\config.json ...
	I0921 21:29:59.354465    6888 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-20220921212952-5916\config.json: {Name:mkc612e665fc17f411d444f37918705dafce5611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 21:29:59.355475    6888 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0921 21:29:59.357479    6888 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	I0921 21:30:02.979679    6888 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 21:30:02.979679    6888 cache.go:208] Successfully downloaded all kic artifacts
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220921212952-5916"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.51s)
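
Note: the preload above is downloaded with a "?checksum=md5:..." query, and the log then records getting and verifying that checksum for the tarball. A minimal Go sketch of that kind of verification follows, assuming the expected digest is known up front; the path and digest are copied from the log, everything else is illustrative.

package main

import (
    "crypto/md5"
    "encoding/hex"
    "fmt"
    "io"
    "log"
    "os"
)

// verifyMD5 streams the file through an MD5 hash and compares the hex digest.
func verifyMD5(path, want string) error {
    f, err := os.Open(path)
    if err != nil {
        return err
    }
    defer f.Close()
    h := md5.New()
    if _, err := io.Copy(h, f); err != nil {
        return err
    }
    got := hex.EncodeToString(h.Sum(nil))
    if got != want {
        return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
    }
    return nil
}

func main() {
    // Path and digest as they appear in the log above.
    tarball := `C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4`
    if err := verifyMD5(tarball, "326f3ce331abb64565b50b8c9e791244"); err != nil {
        log.Fatal(err)
    }
    fmt.Println("preload checksum OK")
}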

                                                
                                    
TestDownloadOnly/v1.25.2/json-events (9.46s)

=== RUN   TestDownloadOnly/v1.25.2/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220921212952-5916 --force --alsologtostderr --kubernetes-version=v1.25.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220921212952-5916 --force --alsologtostderr --kubernetes-version=v1.25.2 --container-runtime=docker --driver=docker: (9.4644966s)
--- PASS: TestDownloadOnly/v1.25.2/json-events (9.46s)

                                                
                                    
TestDownloadOnly/v1.25.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.2/preload-exists
--- PASS: TestDownloadOnly/v1.25.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.2/kubectl
--- PASS: TestDownloadOnly/v1.25.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.2/LogsDuration (0.7s)

=== RUN   TestDownloadOnly/v1.25.2/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220921212952-5916
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220921212952-5916: exit status 85 (699.3526ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|-----------------------------------|-------------------|---------|---------------------|----------|
	| Command |               Args                |              Profile              |       User        | Version |     Start Time      | End Time |
	|---------|-----------------------------------|-----------------------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only -p        | download-only-20220921212952-5916 | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:29 GMT |          |
	|         | download-only-20220921212952-5916 |                                   |                   |         |                     |          |
	|         | --force --alsologtostderr         |                                   |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                                   |                   |         |                     |          |
	|         | --container-runtime=docker        |                                   |                   |         |                     |          |
	|         | --driver=docker                   |                                   |                   |         |                     |          |
	| start   | -o=json --download-only -p        | download-only-20220921212952-5916 | minikube2\jenkins | v1.27.0 | 21 Sep 22 21:30 GMT |          |
	|         | download-only-20220921212952-5916 |                                   |                   |         |                     |          |
	|         | --force --alsologtostderr         |                                   |                   |         |                     |          |
	|         | --kubernetes-version=v1.25.2      |                                   |                   |         |                     |          |
	|         | --container-runtime=docker        |                                   |                   |         |                     |          |
	|         | --driver=docker                   |                                   |                   |         |                     |          |
	|---------|-----------------------------------|-----------------------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 21:30:04
	Running on machine: minikube2
	Binary: Built with gc go1.19.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 21:30:04.547050    9188 out.go:296] Setting OutFile to fd 648 ...
	I0921 21:30:04.613704    9188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:30:04.613704    9188 out.go:309] Setting ErrFile to fd 664...
	I0921 21:30:04.613704    9188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:30:04.628344    9188 root.go:310] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0921 21:30:04.637564    9188 out.go:303] Setting JSON to true
	I0921 21:30:04.640187    9188 start.go:115] hostinfo: {"hostname":"minikube2","uptime":1873,"bootTime":1663793931,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 21:30:04.640187    9188 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 21:30:04.899183    9188 out.go:97] [download-only-20220921212952-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 21:30:04.900065    9188 notify.go:214] Checking for updates...
	I0921 21:30:04.903158    9188 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 21:30:04.905699    9188 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 21:30:04.908269    9188 out.go:169] MINIKUBE_LOCATION=14995
	I0921 21:30:04.910814    9188 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0921 21:30:04.915505    9188 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0921 21:30:04.916510    9188 config.go:180] Loaded profile config "download-only-20220921212952-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0921 21:30:04.916742    9188 start.go:716] api.Load failed for download-only-20220921212952-5916: filestore "download-only-20220921212952-5916": Docker machine "download-only-20220921212952-5916" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0921 21:30:04.916742    9188 driver.go:365] Setting default libvirt URI to qemu:///system
	W0921 21:30:04.916742    9188 start.go:716] api.Load failed for download-only-20220921212952-5916: filestore "download-only-20220921212952-5916": Docker machine "download-only-20220921212952-5916" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0921 21:30:05.220880    9188 docker.go:137] docker version: linux-20.10.17
	I0921 21:30:05.228650    9188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:30:05.803260    9188 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 21:30:05.3785859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:30:05.985670    9188 out.go:97] Using the docker driver based on existing profile
	I0921 21:30:05.985670    9188 start.go:284] selected driver: docker
	I0921 21:30:05.985670    9188 start.go:808] validating driver "docker" against &{Name:download-only-20220921212952-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220921212952-5916 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socket
VMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:30:06.001541    9188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:30:06.554472    9188 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 21:30:06.1627807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:30:06.602136    9188 cni.go:95] Creating CNI manager for ""
	I0921 21:30:06.602136    9188 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0921 21:30:06.602136    9188 start_flags.go:316] config:
	{Name:download-only-20220921212952-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:download-only-20220921212952-5916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/r
un/socket_vmnet}
	I0921 21:30:06.785933    9188 out.go:97] Starting control plane node download-only-20220921212952-5916 in cluster download-only-20220921212952-5916
	I0921 21:30:06.786622    9188 cache.go:120] Beginning downloading kic base image for docker with docker
	I0921 21:30:06.789277    9188 out.go:97] Pulling base image ...
	I0921 21:30:06.789277    9188 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 21:30:06.789277    9188 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:30:06.850046    9188 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.2/preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 21:30:06.850046    9188 cache.go:57] Caching tarball of preloaded images
	I0921 21:30:06.850046    9188 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
	I0921 21:30:06.853218    9188 out.go:97] Downloading Kubernetes v1.25.2 preload ...
	I0921 21:30:06.853218    9188 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 ...
	I0921 21:30:06.956828    9188 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.2/preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4?checksum=md5:b0e374b6adbebc5b5e0cfc12622b2408 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
	I0921 21:30:07.029033    9188 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 21:30:07.029757    9188 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:30:07.030107    9188 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.34@sha256_f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c.tar
	I0921 21:30:07.030107    9188 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 21:30:07.030181    9188 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 21:30:07.030181    9188 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 21:30:07.030439    9188 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220921212952-5916"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.2/LogsDuration (0.70s)
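
Note: both LogsDuration runs exit with status 85 because only artifacts were downloaded and no control-plane node exists yet, and the test tolerates that. A Go sketch of distinguishing that exit code via os/exec is shown below; the command line is copied from the run above, and treating 85 as the expected "no cluster" outcome is an assumption based on this output rather than a documented contract.

package main

import (
    "errors"
    "fmt"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("out/minikube-windows-amd64.exe", "logs", "-p",
        "download-only-20220921212952-5916")
    out, err := cmd.CombinedOutput()
    var exitErr *exec.ExitError
    switch {
    case err == nil:
        fmt.Println("logs succeeded")
    case errors.As(err, &exitErr) && exitErr.ExitCode() == 85:
        // Matches the behavior above: the profile only downloaded artifacts,
        // so there is no control-plane node to collect logs from.
        fmt.Printf("expected non-zero exit (85); output:\n%s\n", out)
    default:
        log.Fatalf("unexpected failure: %v", err)
    }
}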

                                                
                                    
TestDownloadOnly/DeleteAll (2.53s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.5253682s)
--- PASS: TestDownloadOnly/DeleteAll (2.53s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (1.53s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20220921212952-5916
aaa_download_only_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20220921212952-5916: (1.5260969s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (1.53s)

                                                
                                    
TestDownloadOnlyKic (35.3s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20220921213020-5916 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20220921213020-5916 --force --alsologtostderr --driver=docker: (32.5196924s)
helpers_test.go:175: Cleaning up "download-docker-20220921213020-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20220921213020-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20220921213020-5916: (1.68964s)
--- PASS: TestDownloadOnlyKic (35.30s)
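
Note: the download-only runs cache the kic base image as a tarball under .minikube\cache\kic\amd64, with the image reference sanitized for Windows filenames (the localpath.go "windows sanitize" lines earlier show ':' mapped to '_'). The Go sketch below shows that mapping plus an existence check; the cache directory is taken from the log and the sanitize rule is inferred from the two printed paths, so this is an approximation, not minikube's actual helper.

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

// sanitizeWindows mirrors the mapping visible in the log above:
// ':' is not a legal filename character on Windows, so it becomes '_'.
func sanitizeWindows(name string) string {
    return strings.ReplaceAll(name, ":", "_")
}

func main() {
    cacheDir := `C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64`
    image := "kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c"
    tarball := filepath.Join(cacheDir, sanitizeWindows(image)+".tar")
    if _, err := os.Stat(tarball); err != nil {
        fmt.Println("kic base image tarball not cached yet:", err)
        return
    }
    fmt.Println("cached kic base image:", tarball)
}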

                                                
                                    
TestBinaryMirror (4.36s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220921213055-5916 --alsologtostderr --binary-mirror http://127.0.0.1:57904 --driver=docker
aaa_download_only_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220921213055-5916 --alsologtostderr --binary-mirror http://127.0.0.1:57904 --driver=docker: (2.3960576s)
helpers_test.go:175: Cleaning up "binary-mirror-20220921213055-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-20220921213055-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-20220921213055-5916: (1.7458882s)
--- PASS: TestBinaryMirror (4.36s)
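
Note: TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:57904 above). One way to stand up such a mirror is simply to serve a directory of pre-fetched release binaries over HTTP; the net/http sketch below does that, with the directory name purely illustrative (the real test sets up its mirror inside the test binary).

package main

import (
    "log"
    "net/http"
)

func main() {
    // Serve a local directory of pre-fetched release binaries (kubectl, kubeadm,
    // kubelet, ...) so "minikube start --binary-mirror http://127.0.0.1:57904"
    // can fetch from it instead of the public release bucket.
    // "./binary-mirror" is an illustrative path, not taken from the test.
    fs := http.FileServer(http.Dir("./binary-mirror"))
    log.Println("binary mirror listening on 127.0.0.1:57904")
    log.Fatal(http.ListenAndServe("127.0.0.1:57904", fs))
}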

                                                
                                    
TestErrorSpam/start (5.58s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 start --dry-run: (1.8445952s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 start --dry-run: (1.8225884s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 start --dry-run: (1.9104209s)
--- PASS: TestErrorSpam/start (5.58s)

                                                
                                    
TestErrorSpam/status (1.7s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 status: exit status 7 (532.8951ms)

                                                
                                                
-- stdout --
	nospam-20220921213151-5916
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:32:45.916221    2180 status.go:258] status error: host: state: unknown state "nospam-20220921213151-5916": docker container inspect nospam-20220921213151-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	E0921 21:32:45.916221    2180 status.go:261] The "nospam-20220921213151-5916" host does not exist!

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220921213151-5916 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 status: exit status 7 (546.7133ms)

                                                
                                                
-- stdout --
	nospam-20220921213151-5916
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:32:46.463409    2964 status.go:258] status error: host: state: unknown state "nospam-20220921213151-5916": docker container inspect nospam-20220921213151-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	E0921 21:32:46.463472    2964 status.go:261] The "nospam-20220921213151-5916" host does not exist!

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220921213151-5916 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 status: exit status 7 (613.7674ms)

                                                
                                                
-- stdout --
	nospam-20220921213151-5916
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:32:47.079119    8296 status.go:258] status error: host: state: unknown state "nospam-20220921213151-5916": docker container inspect nospam-20220921213151-5916 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	E0921 21:32:47.079119    8296 status.go:261] The "nospam-20220921213151-5916" host does not exist!

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220921213151-5916 status" failed: exit status 7
--- PASS: TestErrorSpam/status (1.70s)
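
Note: each status call above exits with status 7 because the underlying probe, docker container inspect <name> --format={{.State.Status}}, reports "No such container", and minikube then reports the host as Nonexistent. A standalone Go sketch of that probe follows; the container name is from the log, and the Nonexistent mapping mirrors the status output above rather than minikube's actual code.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    name := "nospam-20220921213151-5916"
    // Same probe the status command runs, per the stderr above.
    out, err := exec.Command("docker", "container", "inspect", name,
        "--format", "{{.State.Status}}").CombinedOutput()
    if err != nil {
        // "No such container" is what the failing runs above hit.
        fmt.Printf("host %q: Nonexistent (%s)\n", name, strings.TrimSpace(string(out)))
        return
    }
    fmt.Printf("host %q: %s\n", name, strings.TrimSpace(string(out)))
}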

                                                
                                    
TestErrorSpam/pause (3.26s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 pause: exit status 80 (1.1850608s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220921213151-5916": docker container inspect nospam-20220921213151-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_466.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220921213151-5916 pause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 pause: exit status 80 (1.0771149s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220921213151-5916": docker container inspect nospam-20220921213151-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_466.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220921213151-5916 pause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 pause: exit status 80 (994.8849ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220921213151-5916": docker container inspect nospam-20220921213151-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_466.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220921213151-5916 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (3.26s)
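Every exit-status-80 failure above reduces to the same condition: `docker container inspect` answers "No such container" for the profile's node, so minikube reports GUEST_STATUS "unknown state". The following is a minimal Go sketch of that state probe, not minikube's own code; the `{{.State.Status}}` template is an assumption (the log elides the real template behind `--format=:`), and a Docker CLI on PATH is assumed.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState mirrors the check seen in the stderr blocks above: ask Docker
// for the container's state and treat "No such container" as "node missing".
// The {{.State.Status}} template is an assumption; the log elides the real one.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "", fmt.Errorf("container %q does not exist", name)
		}
		return "", fmt.Errorf("docker inspect failed: %v: %s", err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Profile name taken from the log above.
	state, err := containerState("nospam-20220921213151-5916")
	if err != nil {
		fmt.Println("GUEST_STATUS-style failure:", err)
		return
	}
	fmt.Println("container state:", state)
}

Running this while the nospam container is absent reproduces the failure mode logged for pause and unpause.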

                                                
                                    
TestErrorSpam/unpause (3.11s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 unpause: exit status 80 (1.0174828s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220921213151-5916": docker container inspect nospam-20220921213151-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_466.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220921213151-5916 unpause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 unpause: exit status 80 (1.0691266s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220921213151-5916": docker container inspect nospam-20220921213151-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_466.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220921213151-5916 unpause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 unpause: exit status 80 (1.0162864s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20220921213151-5916": docker container inspect nospam-20220921213151-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_466.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220921213151-5916 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (3.11s)

                                                
                                    
TestErrorSpam/stop (57.12s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 stop
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 stop: exit status 82 (19.0531636s)

                                                
                                                
-- stdout --
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:32:57.322193    8304 daemonize_windows.go:38] error terminating scheduled stop for profile nospam-20220921213151-5916: stopping schedule-stop service for profile nospam-20220921213151-5916: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20220921213151-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20220921213151-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20220921213151-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_466.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220921213151-5916 stop" failed: exit status 82
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 stop
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 stop: exit status 82 (19.0131154s)

                                                
                                                
-- stdout --
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:33:16.325682    8472 daemonize_windows.go:38] error terminating scheduled stop for profile nospam-20220921213151-5916: stopping schedule-stop service for profile nospam-20220921213151-5916: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20220921213151-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20220921213151-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20220921213151-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_466.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220921213151-5916 stop" failed: exit status 82
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 stop
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20220921213151-5916 stop: exit status 82 (19.0498893s)

                                                
                                                
-- stdout --
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	* Stopping node "nospam-20220921213151-5916"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 21:33:35.340313    7996 daemonize_windows.go:38] error terminating scheduled stop for profile nospam-20220921213151-5916: stopping schedule-stop service for profile nospam-20220921213151-5916: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20220921213151-5916": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20220921213151-5916: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20220921213151-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20220921213151-5916
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_466.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-windows-amd64.exe -p nospam-20220921213151-5916 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20220921213151-5916 stop" failed: exit status 82
--- PASS: TestErrorSpam/stop (57.12s)
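The stop failures additionally show how the SSH host port is resolved, via the Go template quoted verbatim in the stderr above ('{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'). Below is a hedged Go reproduction of that lookup, not minikube's implementation; it assumes the Docker CLI is available and trims the stray single quotes that the logged command wraps around the template.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort reproduces the port lookup quoted in the stderr above: it asks
// Docker which host port is mapped to the container's 22/tcp endpoint.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).CombinedOutput()
	if err != nil {
		// "Error: No such container: ..." surfaces here once the node is gone,
		// which is what triggers the GUEST_STOP_TIMEOUT exit above.
		return "", fmt.Errorf("get port 22 for %q: %v: %s", container, err, out)
	}
	return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
}

func main() {
	port, err := sshHostPort("nospam-20220921213151-5916")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", port)
}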

                                                
                                    
TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\5916\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 cache add k8s.gcr.io/pause:3.1: (1.4595096s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 cache add k8s.gcr.io/pause:3.3: (1.4752612s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 cache add k8s.gcr.io/pause:latest: (1.3251608s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.26s)
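For a standalone reproduction of the cache step, the three `cache add` invocations above can be driven from a short Go program; the binary path and profile name are copied from the log and would need adjusting for a local run.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name are taken verbatim from the log above.
	bin := "out/minikube-windows-amd64.exe"
	profile := "functional-20220921213353-5916"

	for _, img := range []string{"k8s.gcr.io/pause:3.1", "k8s.gcr.io/pause:3.3", "k8s.gcr.io/pause:latest"} {
		out, err := exec.Command(bin, "-p", profile, "cache", "add", img).CombinedOutput()
		if err != nil {
			fmt.Printf("cache add %s failed: %v\n%s\n", img, err, out)
			continue
		}
		fmt.Println("cached", img)
	}
}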

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.71s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (2.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 config get cpus: exit status 14 (350.5479ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 config set cpus 2

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220921213353-5916 config get cpus: exit status 14 (381.4833ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.42s)
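The ConfigCmd flow above is set/get/unset on `cpus`, with exit status 14 signalling a missing key. A hedged Go sketch of the same sequence, checking the exit codes, is below; the binary path and profile name are taken from the log, and the helper is illustrative rather than the test's own code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes a `minikube config` subcommand, as the test does, and returns
// the process exit code (0 on success, 14 when the key is not set).
func run(args ...string) int {
	bin := "out/minikube-windows-amd64.exe" // path as used in the log
	all := append([]string{"-p", "functional-20220921213353-5916", "config"}, args...)
	if err := exec.Command(bin, all...).Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return -1 // binary not found, etc.
	}
	return 0
}

func main() {
	run("unset", "cpus")
	fmt.Println("get after unset ->", run("get", "cpus")) // 14 expected, per the log
	run("set", "cpus", "2")
	fmt.Println("get after set   ->", run("get", "cpus")) // 0 expected
	run("unset", "cpus")
}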

                                                
                                    
TestFunctional/parallel/DryRun (3.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --dry-run --memory 250MB --alsologtostderr --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.3709642s)

                                                
                                                
-- stdout --
	* [functional-20220921213353-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:37:40.034791    9012 out.go:296] Setting OutFile to fd 980 ...
	I0921 21:37:40.096725    9012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:37:40.096725    9012 out.go:309] Setting ErrFile to fd 784...
	I0921 21:37:40.096725    9012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:37:40.114740    9012 out.go:303] Setting JSON to false
	I0921 21:37:40.117727    9012 start.go:115] hostinfo: {"hostname":"minikube2","uptime":2328,"bootTime":1663793932,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 21:37:40.117727    9012 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 21:37:40.121730    9012 out.go:177] * [functional-20220921213353-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 21:37:40.124718    9012 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 21:37:40.127723    9012 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 21:37:40.129724    9012 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 21:37:40.131744    9012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:37:40.134753    9012 config.go:180] Loaded profile config "functional-20220921213353-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:37:40.136725    9012 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:37:40.436725    9012 docker.go:137] docker version: linux-20.10.17
	I0921 21:37:40.444721    9012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:37:41.046047    9012 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 21:37:40.6029551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:37:41.049054    9012 out.go:177] * Using the docker driver based on existing profile
	I0921 21:37:41.051054    9012 start.go:284] selected driver: docker
	I0921 21:37:41.051054    9012 start.go:808] validating driver "docker" against &{Name:functional-20220921213353-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:functional-20220921213353-5916 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:37:41.052053    9012 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 21:37:41.104399    9012 out.go:177] 
	W0921 21:37:41.107608    9012 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0921 21:37:41.110454    9012 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --dry-run --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:983: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --dry-run --alsologtostderr -v=1 --driver=docker: (2.0740809s)
--- PASS: TestFunctional/parallel/DryRun (3.45s)
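The first DryRun invocation is the interesting one: requesting `--memory 250MB` makes `minikube start --dry-run` exit with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal Go reproduction, using the binary path and profile name from the log, is sketched below; it is a standalone illustration, not the test harness.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the first DryRun invocation above: an undersized --memory request
	// should make the dry-run start exit with status 23.
	cmd := exec.Command("out/minikube-windows-amd64.exe", "start",
		"-p", "functional-20220921213353-5916",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=docker")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("exit status %d (expected 23)\n%s\n", ee.ExitCode(), out)
		return
	}
	fmt.Println("unexpected success or missing binary:", err)
}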

                                                
                                    
TestFunctional/parallel/InternationalLanguage (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --dry-run --memory 250MB --alsologtostderr --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220921213353-5916 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.4587785s)

                                                
                                                
-- stdout --
	* [functional-20220921213353-5916] minikube v1.27.0 sur Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:37:39.172024    5284 out.go:296] Setting OutFile to fd 876 ...
	I0921 21:37:39.239719    5284 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:37:39.240254    5284 out.go:309] Setting ErrFile to fd 968...
	I0921 21:37:39.240318    5284 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:37:39.263628    5284 out.go:303] Setting JSON to false
	I0921 21:37:39.266815    5284 start.go:115] hostinfo: {"hostname":"minikube2","uptime":2327,"bootTime":1663793932,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0921 21:37:39.266880    5284 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0921 21:37:39.270282    5284 out.go:177] * [functional-20220921213353-5916] minikube v1.27.0 sur Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0921 21:37:39.273825    5284 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0921 21:37:39.276465    5284 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0921 21:37:39.278316    5284 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 21:37:39.281094    5284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:37:39.285177    5284 config.go:180] Loaded profile config "functional-20220921213353-5916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.2
	I0921 21:37:39.286826    5284 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:37:39.608244    5284 docker.go:137] docker version: linux-20.10.17
	I0921 21:37:39.616194    5284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:37:40.199716    5284 info.go:265] docker info: {ID:6VFI:N2PO:PLBP:KXLR:EN7R:OR7P:564S:UY7N:ZWDA:5V2U:KGL2:KJZ7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 21:37:39.7888192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I0921 21:37:40.202772    5284 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0921 21:37:40.205771    5284 start.go:284] selected driver: docker
	I0921 21:37:40.205771    5284 start.go:808] validating driver "docker" against &{Name:functional-20220921213353-5916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:functional-20220921213353-5916 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:37:40.205771    5284 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 21:37:40.322720    5284 out.go:177] 
	W0921 21:37:40.324716    5284 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0921 21:37:40.327729    5284 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.46s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 addons list

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1631: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (1.07s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe profile lis

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-windows-amd64.exe profile list

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "822.7443ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1324: Took "399.0919ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (1.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: Took "807.6964ms" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1374: Took "400.0666ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (1.21s)
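The `Took "..."` lines above are simple wall-clock measurements around the profile commands. A tiny Go sketch of the same measurement for `profile list -o json` follows; the binary path is copied from the log and a local build of it is assumed.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Times `minikube profile list -o json`, the measurement the test reports
	// as `Took "..." to run ...`.
	start := time.Now()
	out, err := exec.Command("out/minikube-windows-amd64.exe", "profile", "list", "-o", "json").CombinedOutput()
	if err != nil {
		fmt.Println("profile list failed:", err)
	}
	fmt.Printf("took %s\n%s\n", time.Since(start), out)
}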

                                                
                                    
TestFunctional/parallel/Version/short (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 version --short
--- PASS: TestFunctional/parallel/Version/short (0.38s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20220921213353-5916 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20220921213353-5916 tunnel --alsologtostderr] ...
helpers_test.go:506: unable to kill pid 7392: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image rm gcr.io/google-containers/addon-resizer:functional-20220921213353-5916

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220921213353-5916 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.17s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.4s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220921213353-5916
--- PASS: TestFunctional/delete_addon-resizer_images (0.40s)

                                                
                                    
TestFunctional/delete_my-image_image (0.19s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220921213353-5916
--- PASS: TestFunctional/delete_my-image_image (0.19s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.22s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220921213353-5916
--- PASS: TestFunctional/delete_minikube_cached_images (0.22s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220921214242-5916 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestErrorJSONOutput (1.83s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20220921214450-5916 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20220921214450-5916 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (398.551ms)

-- stdout --
	{"specversion":"1.0","id":"5f5a636f-0165-4663-ab39-831a55d10bfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220921214450-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5fbfaca-11b4-4706-97aa-f5a437c6740e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"149f07fb-d639-41fd-9cdb-7fcd3791b6e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"88da8598-fc8d-4d7e-aaf4-88d34f4cd074","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14995"}}
	{"specversion":"1.0","id":"831547b0-87b8-4266-a2c8-f0d1fea2be4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d66a0978-329b-4da4-b597-b67eb8e84aa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220921214450-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20220921214450-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20220921214450-5916: (1.4275761s)
--- PASS: TestErrorJSONOutput (1.83s)
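Note: the -- stdout -- block above shows the line-delimited, CloudEvents-style JSON that `minikube start --output=json` emits, one event per line with specversion/id/source/type/datacontenttype/data fields. The Go sketch below is illustrative only (not part of the test suite); the struct and file name are assumptions, while the field names are taken directly from the events captured above.

// events_sketch.go — minimal sketch: decode the JSON event lines shown above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the records seen in the -- stdout -- block above.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe in the captured stdout
	for sc.Scan() {
		line := sc.Bytes()
		if len(line) == 0 {
			continue
		}
		var ev event
		if err := json.Unmarshal(line, &ev); err != nil {
			fmt.Fprintf(os.Stderr, "skipping non-JSON line: %v\n", err)
			continue
		}
		// io.k8s.sigs.minikube.error events also carry an "exitcode" entry, as seen above.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}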

TestKicCustomNetwork/use_default_bridge_network (193.57s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220921214811-5916 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220921214811-5916 --network=bridge: (2m47.0997034s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220921214811-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220921214811-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220921214811-5916: (26.2646725s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (193.57s)

TestMainNoArgs (0.33s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.33s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.57s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220921220434-5916 --no-kubernetes --kubernetes-version=1.20 --driver=docker

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220921220434-5916 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (569.6319ms)

-- stdout --
	* [NoKubernetes-20220921220434-5916] minikube v1.27.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.57s)
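Note: the run above exits with status 14 after the MK_USAGE error because --kubernetes-version cannot be combined with --no-kubernetes. The Go sketch below is illustrative only (it is not the helpers_test.go code) and simply reuses the command line from the log to show how such a non-zero exit can be asserted with os/exec.

// exitcode_sketch.go — minimal sketch: run the command from the log and report its exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"start", "-p", "NoKubernetes-20220921220434-5916",
		"--no-kubernetes", "--kubernetes-version=1.20", "--driver=docker")
	out, err := cmd.CombinedOutput()

	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// The run captured above finished with exit code 14 alongside the MK_USAGE message.
		fmt.Printf("exit code %d\n%s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run command:", err)
		return
	}
	fmt.Println("unexpected success")
}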

TestStoppedBinaryUpgrade/Setup (0.51s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.51s)

TestNoKubernetes/serial/VerifyK8sNotRunning (1.5s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-20220921220434-5916 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-20220921220434-5916 "sudo systemctl is-active --quiet service kubelet": exit status 80 (1.4955131s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "NoKubernetes-20220921220434-5916": docker container inspect NoKubernetes-20220921220434-5916 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220921220434-5916
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_ef99f3f3976bdc9ede40cba20b814885e47e2c2a_1.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (1.50s)

TestNoKubernetes/serial/ProfileList (3.59s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.774674s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json

=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (1.8178552s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.59s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.59s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220921221222-5916 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.59s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/224)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.25.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.25.2/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.2/cached-images (0.00s)

TestDownloadOnly/v1.25.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.25.2/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.2/binaries (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/serial/CacheCmd/cache/add_local (0.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220921213353-5916 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local645902255\001
functional_test.go:1069: (dbg) Non-zero exit: docker build -t minikube-local-cache-test:functional-20220921213353-5916 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local645902255\001: exit status 1 (246.3894ms)

** stderr ** 
	#2 [internal] load .dockerignore
	#2 sha256:965faa19f6d9adf139def9a38de904507e7ef2592307710859ed9e32229d8de1
	#2 ERROR: failed to create lease: write /var/lib/docker/buildkit/containerdmeta.db: read-only file system
	
	#1 [internal] load build definition from Dockerfile
	#1 sha256:73c714de95063cbeb638ba41ee3b595320d02b7b205a8444e51684d551082326
	#1 ERROR: failed to create lease: write /var/lib/docker/buildkit/containerdmeta.db: read-only file system
	------
	 > [internal] load .dockerignore:
	------
	------
	 > [internal] load build definition from Dockerfile:
	------
	failed to solve with frontend dockerfile.v0: failed to read dockerfile: failed to create lease: write /var/lib/docker/buildkit/containerdmeta.db: read-only file system

** /stderr **
functional_test.go:1071: failed to build docker image, skipping local test: exit status 1
--- SKIP: TestFunctional/serial/CacheCmd/cache/add_local (0.28s)

TestFunctional/parallel/DashboardCmd (300.03s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220921213353-5916 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:908: output didn't produce a URL
functional_test.go:902: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220921213353-5916 --alsologtostderr -v=1] ...
helpers_test.go:500: unable to terminate pid 9180: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)

TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (1.55s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220921221219-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220921221219-5916

=== CONT  TestStartStop/group/disable-driver-mounts
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220921221219-5916: (1.5474582s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.55s)

TestNetworkPlugins/group/flannel (1.56s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220921220528-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20220921220528-5916

=== CONT  TestNetworkPlugins/group/flannel
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20220921220528-5916: (1.560626s)
--- SKIP: TestNetworkPlugins/group/flannel (1.56s)

TestNetworkPlugins/group/custom-flannel (1.56s)
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220921220530-5916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-flannel-20220921220530-5916
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-flannel-20220921220530-5916: (1.558244s)
--- SKIP: TestNetworkPlugins/group/custom-flannel (1.56s)
