Test Report: Docker_Windows 12739

80e07762e28b592b48b4aeaf3aab89efbbe303e1:2021-11-17:21391

Failed tests (154/234)

Order  Failed test  Duration (s)
26 TestOffline 44.24
28 TestAddons/Setup 85.48
29 TestCertOptions 48.43
30 TestCertExpiration 291.54
31 TestDockerFlags 45.81
32 TestForceSystemdFlag 44.31
33 TestForceSystemdEnv 44.94
38 TestErrorSpam/setup 37.18
47 TestFunctional/serial/StartWithProxy 39.02
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 59.02
50 TestFunctional/serial/KubeContext 2.22
51 TestFunctional/serial/KubectlGetPods 2.15
54 TestFunctional/serial/CacheCmd/cache/add_remote 5.38
56 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.33
57 TestFunctional/serial/CacheCmd/cache/list 0.29
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 1.73
59 TestFunctional/serial/CacheCmd/cache/cache_reload 5.65
60 TestFunctional/serial/CacheCmd/cache/delete 0.64
61 TestFunctional/serial/MinikubeKubectlCmd 4.06
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 3.77
63 TestFunctional/serial/ExtraConfig 58.84
64 TestFunctional/serial/ComponentHealth 2.16
65 TestFunctional/serial/LogsCmd 2.22
66 TestFunctional/serial/LogsFileCmd 1.93
72 TestFunctional/parallel/StatusCmd 7.36
75 TestFunctional/parallel/ServiceCmd 3.05
77 TestFunctional/parallel/PersistentVolumeClaim 1.99
79 TestFunctional/parallel/SSHCmd 5.68
80 TestFunctional/parallel/CpCmd 3.76
81 TestFunctional/parallel/MySQL 2.33
82 TestFunctional/parallel/FileSync 3.92
83 TestFunctional/parallel/CertSync 13.11
87 TestFunctional/parallel/NodeLabels 2.24
89 TestFunctional/parallel/NonActiveRuntimeDisabled 1.87
92 TestFunctional/parallel/Version/components 2.14
93 TestFunctional/parallel/DockerEnv/powershell 7.45
94 TestFunctional/parallel/UpdateContextCmd/no_changes 1.89
95 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 1.86
96 TestFunctional/parallel/UpdateContextCmd/no_clusters 1.85
103 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
109 TestFunctional/parallel/ImageCommands/ImageList 1.81
110 TestFunctional/parallel/ImageCommands/ImageBuild 5.31
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 9.04
113 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.77
115 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.77
122 TestIngressAddonLegacy/StartLegacyK8sCluster 44.04
124 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 3.71
126 TestIngressAddonLegacy/serial/ValidateIngressAddons 1.83
129 TestJSONOutput/start/Command 37.19
130 TestJSONOutput/start/Audit 0
132 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
133 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0.01
135 TestJSONOutput/pause/Command 1.75
136 TestJSONOutput/pause/Audit 0
141 TestJSONOutput/unpause/Command 1.82
142 TestJSONOutput/unpause/Audit 0
147 TestJSONOutput/stop/Command 14.98
148 TestJSONOutput/stop/Audit 0
150 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
154 TestKicCustomNetwork/create_custom_network 221.87
160 TestMountStart/serial/StartWithMountFirst 39.09
161 TestMountStart/serial/StartWithMountSecond 39.32
162 TestMountStart/serial/VerifyMountFirst 3.63
163 TestMountStart/serial/VerifyMountSecond 3.61
165 TestMountStart/serial/VerifyMountPostDelete 3.65
166 TestMountStart/serial/Stop 16.98
167 TestMountStart/serial/RestartStopped 59.45
168 TestMountStart/serial/VerifyMountPostStop 3.61
171 TestMultiNode/serial/FreshStart2Nodes 39.08
172 TestMultiNode/serial/DeployApp2Nodes 14.13
173 TestMultiNode/serial/PingHostFrom2Pods 3.64
174 TestMultiNode/serial/AddNode 3.67
175 TestMultiNode/serial/ProfileList 3.7
176 TestMultiNode/serial/CopyFile 3.52
177 TestMultiNode/serial/StopNode 5.56
178 TestMultiNode/serial/StartAfterStop 4.1
179 TestMultiNode/serial/RestartKeepsNodes 74.82
180 TestMultiNode/serial/DeleteNode 5.42
181 TestMultiNode/serial/StopMultiNode 20.53
182 TestMultiNode/serial/RestartMultiNode 59.35
183 TestMultiNode/serial/ValidateNameConflict 82.35
187 TestPreload 42.05
188 TestScheduledStopWindows 42.01
190 TestSkaffold 43.49
192 TestInsufficientStorage 11.35
195 TestKubernetesUpgrade 75.67
196 TestMissingContainerUpgrade 271.63
198 TestNoKubernetes/serial/Start 40.47
212 TestNoKubernetes/serial/Stop 17.17
213 TestNoKubernetes/serial/StartNoArgs 67.11
218 TestPause/serial/Start 43.35
226 TestNetworkPlugins/group/auto/Start 41.41
227 TestNetworkPlugins/group/false/Start 42.38
229 TestPause/serial/SecondStartNoReconfiguration 60.97
230 TestNetworkPlugins/group/cilium/Start 38.52
231 TestNetworkPlugins/group/calico/Start 38.15
232 TestNetworkPlugins/group/custom-weave/Start 38.03
233 TestNetworkPlugins/group/enable-default-cni/Start 38.07
234 TestNetworkPlugins/group/kindnet/Start 38.24
235 TestPause/serial/Pause 5.73
236 TestPause/serial/VerifyStatus 3.89
237 TestNetworkPlugins/group/bridge/Start 38.26
238 TestPause/serial/Unpause 5.69
239 TestPause/serial/PauseAgain 5.67
241 TestPause/serial/VerifyDeletedResources 3.33
242 TestNetworkPlugins/group/kubenet/Start 38.16
244 TestStartStop/group/old-k8s-version/serial/FirstStart 40.33
246 TestStartStop/group/embed-certs/serial/FirstStart 40.36
248 TestStartStop/group/no-preload/serial/FirstStart 39.73
249 TestStartStop/group/old-k8s-version/serial/DeployApp 4.15
250 TestStartStop/group/embed-certs/serial/DeployApp 4.2
252 TestStartStop/group/default-k8s-different-port/serial/FirstStart 39.5
253 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 4.1
254 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 4.03
255 TestStartStop/group/old-k8s-version/serial/Stop 17.1
256 TestStartStop/group/embed-certs/serial/Stop 17.2
257 TestStartStop/group/no-preload/serial/DeployApp 4.11
258 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 5.65
259 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 5.58
260 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.96
261 TestStartStop/group/no-preload/serial/Stop 17.12
262 TestStartStop/group/old-k8s-version/serial/SecondStart 60.54
263 TestStartStop/group/embed-certs/serial/SecondStart 60.31
264 TestStartStop/group/default-k8s-different-port/serial/DeployApp 3.91
265 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 3.91
266 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 5.57
267 TestStartStop/group/default-k8s-different-port/serial/Stop 17.08
268 TestStartStop/group/no-preload/serial/SecondStart 59.83
269 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 5.42
270 TestStartStop/group/default-k8s-different-port/serial/SecondStart 60.06
271 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 1.95
272 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 1.95
273 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 2.21
274 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 2.19
275 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 3.83
276 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 3.79
277 TestStartStop/group/old-k8s-version/serial/Pause 5.8
278 TestStartStop/group/embed-certs/serial/Pause 5.8
280 TestStartStop/group/newest-cni/serial/FirstStart 39.9
281 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 1.98
282 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 2.17
283 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 3.64
284 TestStartStop/group/no-preload/serial/Pause 5.56
285 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 1.9
286 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 2.06
287 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 3.59
288 TestStartStop/group/default-k8s-different-port/serial/Pause 5.56
291 TestStartStop/group/newest-cni/serial/Stop 16.93
292 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 5.3
293 TestStartStop/group/newest-cni/serial/SecondStart 59.46
296 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 3.59
297 TestStartStop/group/newest-cni/serial/Pause 5.49
TestOffline (44.24s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20211117230313-9504 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker

aab_offline_test.go:56: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p offline-docker-20211117230313-9504 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: exit status 80 (38.4335522s)

-- stdout --
	* [offline-docker-20211117230313-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node offline-docker-20211117230313-9504 in cluster offline-docker-20211117230313-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-20211117230313-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:03:13.543212    9728 out.go:297] Setting OutFile to fd 1412 ...
	I1117 23:03:13.661482    9728 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:03:13.661482    9728 out.go:310] Setting ErrFile to fd 1388...
	I1117 23:03:13.661482    9728 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:03:13.675481    9728 out.go:304] Setting JSON to false
	I1117 23:03:13.677470    9728 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79509,"bootTime":1637110684,"procs":130,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:03:13.678472    9728 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:03:13.681479    9728 out.go:176] * [offline-docker-20211117230313-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:03:13.681479    9728 notify.go:174] Checking for updates...
	I1117 23:03:13.687477    9728 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:03:13.690477    9728 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:03:13.693491    9728 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:03:13.696479    9728 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:03:13.697481    9728 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:03:15.364391    9728 docker.go:132] docker version: linux-19.03.12
	I1117 23:03:15.369055    9728 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:03:15.743708    9728 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:03:15.460954644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:03:15.747719    9728 out.go:176] * Using the docker driver based on user configuration
	I1117 23:03:15.747719    9728 start.go:280] selected driver: docker
	I1117 23:03:15.747719    9728 start.go:775] validating driver "docker" against <nil>
	I1117 23:03:15.747719    9728 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:03:15.802704    9728 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:03:16.184505    9728 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:56 SystemTime:2021-11-17 23:03:15.883035606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:03:16.184505    9728 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:03:16.185237    9728 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:03:16.185306    9728 cni.go:93] Creating CNI manager for ""
	I1117 23:03:16.185306    9728 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:03:16.185306    9728 start_flags.go:282] config:
	{Name:offline-docker-20211117230313-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:offline-docker-20211117230313-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:03:16.189257    9728 out.go:176] * Starting control plane node offline-docker-20211117230313-9504 in cluster offline-docker-20211117230313-9504
	I1117 23:03:16.189798    9728 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:03:16.196791    9728 out.go:176] * Pulling base image ...
	I1117 23:03:16.196791    9728 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:03:16.196791    9728 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:03:16.197332    9728 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:03:16.197332    9728 cache.go:57] Caching tarball of preloaded images
	I1117 23:03:16.197540    9728 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:03:16.198117    9728 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:03:16.198535    9728 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\offline-docker-20211117230313-9504\config.json ...
	I1117 23:03:16.198739    9728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\offline-docker-20211117230313-9504\config.json: {Name:mk74869cce66ae4911be7d6d85b0d96562f7d332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:03:16.312542    9728 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:03:16.312610    9728 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:03:16.312610    9728 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:03:16.312610    9728 start.go:313] acquiring machines lock for offline-docker-20211117230313-9504: {Name:mk66bc66065b13f675564669faff9f15ea3aef09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:03:16.312610    9728 start.go:317] acquired machines lock for "offline-docker-20211117230313-9504" in 0s
	I1117 23:03:16.312610    9728 start.go:89] Provisioning new machine with config: &{Name:offline-docker-20211117230313-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:offline-docker-20211117230313-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:03:16.313257    9728 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:03:16.319887    9728 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:03:16.320489    9728 start.go:160] libmachine.API.Create for "offline-docker-20211117230313-9504" (driver="docker")
	I1117 23:03:16.320568    9728 client.go:168] LocalClient.Create starting
	I1117 23:03:16.321242    9728 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:03:16.321242    9728 main.go:130] libmachine: Decoding PEM data...
	I1117 23:03:16.321242    9728 main.go:130] libmachine: Parsing certificate...
	I1117 23:03:16.321242    9728 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:03:16.321894    9728 main.go:130] libmachine: Decoding PEM data...
	I1117 23:03:16.321968    9728 main.go:130] libmachine: Parsing certificate...
	I1117 23:03:16.327272    9728 cli_runner.go:115] Run: docker network inspect offline-docker-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:03:16.428279    9728 cli_runner.go:162] docker network inspect offline-docker-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:03:16.432579    9728 network_create.go:254] running [docker network inspect offline-docker-20211117230313-9504] to gather additional debugging logs...
	I1117 23:03:16.432579    9728 cli_runner.go:115] Run: docker network inspect offline-docker-20211117230313-9504
	W1117 23:03:16.525913    9728 cli_runner.go:162] docker network inspect offline-docker-20211117230313-9504 returned with exit code 1
	I1117 23:03:16.525913    9728 network_create.go:257] error running [docker network inspect offline-docker-20211117230313-9504]: docker network inspect offline-docker-20211117230313-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20211117230313-9504
	I1117 23:03:16.525913    9728 network_create.go:259] output of [docker network inspect offline-docker-20211117230313-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20211117230313-9504
	
	** /stderr **
	I1117 23:03:16.531566    9728 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:03:16.656505    9728 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001cae18] misses:0}
	I1117 23:03:16.656505    9728 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:03:16.657496    9728 network_create.go:106] attempt to create docker network offline-docker-20211117230313-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:03:16.662484    9728 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20211117230313-9504
	W1117 23:03:16.822144    9728 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20211117230313-9504 returned with exit code 1
	W1117 23:03:16.822144    9728 network_create.go:98] failed to create docker network offline-docker-20211117230313-9504 192.168.49.0/24, will retry: subnet is taken
	I1117 23:03:16.839091    9728 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001cae18] amended:false}} dirty:map[] misses:0}
	I1117 23:03:16.839091    9728 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:03:16.858570    9728 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001cae18] amended:true}} dirty:map[192.168.49.0:0xc0001cae18 192.168.58.0:0xc0001caee0] misses:0}
	I1117 23:03:16.858570    9728 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:03:16.858570    9728 network_create.go:106] attempt to create docker network offline-docker-20211117230313-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:03:16.863823    9728 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20211117230313-9504
	I1117 23:03:17.087411    9728 network_create.go:90] docker network offline-docker-20211117230313-9504 192.168.58.0/24 created
	I1117 23:03:17.087411    9728 kic.go:106] calculated static IP "192.168.58.2" for the "offline-docker-20211117230313-9504" container
	I1117 23:03:17.097369    9728 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:03:17.197972    9728 cli_runner.go:115] Run: docker volume create offline-docker-20211117230313-9504 --label name.minikube.sigs.k8s.io=offline-docker-20211117230313-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:03:17.310735    9728 oci.go:102] Successfully created a docker volume offline-docker-20211117230313-9504
	I1117 23:03:17.314633    9728 cli_runner.go:115] Run: docker run --rm --name offline-docker-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20211117230313-9504 --entrypoint /usr/bin/test -v offline-docker-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:03:18.757209    9728 cli_runner.go:168] Completed: docker run --rm --name offline-docker-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20211117230313-9504 --entrypoint /usr/bin/test -v offline-docker-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.4423228s)
	I1117 23:03:18.757209    9728 oci.go:106] Successfully prepared a docker volume offline-docker-20211117230313-9504
	I1117 23:03:18.757209    9728 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:03:18.757209    9728 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:03:18.762685    9728 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:03:18.763376    9728 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:03:18.876447    9728 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:03:18.876447    9728 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:03:19.117215    9728 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:03:18.840388962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:03:19.117215    9728 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:03:19.117215    9728 client.go:171] LocalClient.Create took 2.7966264s
	I1117 23:03:21.125336    9728 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:03:21.128976    9728 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504
	W1117 23:03:21.226706    9728 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504 returned with exit code 1
	I1117 23:03:21.226888    9728 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
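The template `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` walks the `docker container inspect` JSON to find the host port mapped to the container's ssh port; it fails here only because the container was never created. A minimal Go sketch of the equivalent lookup (the struct fields follow the Docker Engine API inspect schema; the sample JSON and port number are invented for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// inspectResult mirrors the fragment of `docker container inspect` output that
// the template walks: NetworkSettings.Ports maps a container port like
// "22/tcp" to its list of host bindings.
type inspectResult struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

// sshHostPort returns the first host port bound to 22/tcp, or "" when the
// binding is absent — the case in the log above, where inspect itself errors
// because the container does not exist.
func sshHostPort(raw []byte) string {
	var r inspectResult
	if err := json.Unmarshal(raw, &r); err != nil {
		return ""
	}
	bindings := r.NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		return ""
	}
	return bindings[0].HostPort
}

func main() {
	sample := []byte(`{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"55021"}]}}}`)
	fmt.Println(sshHostPort(sample)) // 55021
	fmt.Println(sshHostPort([]byte(`{"NetworkSettings":{"Ports":{}}}`)) == "") // true
}
```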
	I1117 23:03:21.507211    9728 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504
	W1117 23:03:21.596763    9728 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504 returned with exit code 1
	I1117 23:03:21.597128    9728 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:22.141808    9728 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504
	W1117 23:03:22.229716    9728 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504 returned with exit code 1
	I1117 23:03:22.229716    9728 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:22.890377    9728 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504
	W1117 23:03:22.983432    9728 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504 returned with exit code 1
	W1117 23:03:22.983854    9728 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	
	W1117 23:03:22.983854    9728 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:22.983961    9728 start.go:129] duration metric: createHost completed in 6.6705601s
	I1117 23:03:22.983961    9728 start.go:80] releasing machines lock for "offline-docker-20211117230313-9504", held for 6.6713003s
	W1117 23:03:22.984128    9728 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:03:22.992462    9728 cli_runner.go:115] Run: docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:23.088824    9728 cli_runner.go:162] docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:23.089021    9728 delete.go:82] Unable to get host status for offline-docker-20211117230313-9504, assuming it has already been deleted: state: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	W1117 23:03:23.089177    9728 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:03:23.089177    9728 start.go:547] Will try again in 5 seconds ...
	I1117 23:03:28.091366    9728 start.go:313] acquiring machines lock for offline-docker-20211117230313-9504: {Name:mk66bc66065b13f675564669faff9f15ea3aef09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:03:28.091459    9728 start.go:317] acquired machines lock for "offline-docker-20211117230313-9504" in 0s
	I1117 23:03:28.091459    9728 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:03:28.091459    9728 fix.go:55] fixHost starting: 
	I1117 23:03:28.099633    9728 cli_runner.go:115] Run: docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:28.196174    9728 cli_runner.go:162] docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:28.196371    9728 fix.go:108] recreateIfNeeded on offline-docker-20211117230313-9504: state= err=unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:28.196422    9728 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:03:28.200450    9728 out.go:176] * docker "offline-docker-20211117230313-9504" container is missing, will recreate.
	I1117 23:03:28.200568    9728 delete.go:124] DEMOLISHING offline-docker-20211117230313-9504 ...
	I1117 23:03:28.209276    9728 cli_runner.go:115] Run: docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:28.299248    9728 cli_runner.go:162] docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:03:28.299559    9728 stop.go:75] unable to get state: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:28.299609    9728 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:28.308384    9728 cli_runner.go:115] Run: docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:28.400096    9728 cli_runner.go:162] docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:28.400324    9728 delete.go:82] Unable to get host status for offline-docker-20211117230313-9504, assuming it has already been deleted: state: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:28.404538    9728 cli_runner.go:115] Run: docker container inspect -f {{.Id}} offline-docker-20211117230313-9504
	W1117 23:03:28.499232    9728 cli_runner.go:162] docker container inspect -f {{.Id}} offline-docker-20211117230313-9504 returned with exit code 1
	I1117 23:03:28.499232    9728 kic.go:360] could not find the container offline-docker-20211117230313-9504 to remove it. will try anyways
	I1117 23:03:28.504132    9728 cli_runner.go:115] Run: docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:28.602755    9728 cli_runner.go:162] docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:03:28.602755    9728 oci.go:83] error getting container status, will try to delete anyways: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:28.607866    9728 cli_runner.go:115] Run: docker exec --privileged -t offline-docker-20211117230313-9504 /bin/bash -c "sudo init 0"
	W1117 23:03:28.696756    9728 cli_runner.go:162] docker exec --privileged -t offline-docker-20211117230313-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:03:28.696945    9728 oci.go:658] error shutdown offline-docker-20211117230313-9504: docker exec --privileged -t offline-docker-20211117230313-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:29.701860    9728 cli_runner.go:115] Run: docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:29.800045    9728 cli_runner.go:162] docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:29.800219    9728 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:29.800303    9728 oci.go:672] temporary error: container offline-docker-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:29.800341    9728 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:30.268185    9728 cli_runner.go:115] Run: docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:30.363685    9728 cli_runner.go:162] docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:30.363757    9728 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:30.363757    9728 oci.go:672] temporary error: container offline-docker-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:30.363757    9728 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:31.259755    9728 cli_runner.go:115] Run: docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:31.346747    9728 cli_runner.go:162] docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:31.346747    9728 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:31.346747    9728 oci.go:672] temporary error: container offline-docker-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:31.347046    9728 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:31.988844    9728 cli_runner.go:115] Run: docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:32.081826    9728 cli_runner.go:162] docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:32.082031    9728 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:32.082031    9728 oci.go:672] temporary error: container offline-docker-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:32.082101    9728 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:33.196065    9728 cli_runner.go:115] Run: docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:33.287465    9728 cli_runner.go:162] docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:33.287465    9728 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:33.287465    9728 oci.go:672] temporary error: container offline-docker-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:33.287465    9728 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:34.804797    9728 cli_runner.go:115] Run: docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:34.896188    9728 cli_runner.go:162] docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:34.896273    9728 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:34.896273    9728 oci.go:672] temporary error: container offline-docker-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:34.896273    9728 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:37.944042    9728 cli_runner.go:115] Run: docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:38.033600    9728 cli_runner.go:162] docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:38.033826    9728 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:38.033902    9728 oci.go:672] temporary error: container offline-docker-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:38.033902    9728 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:43.821733    9728 cli_runner.go:115] Run: docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:43.910912    9728 cli_runner.go:162] docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:43.911107    9728 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:43.911107    9728 oci.go:672] temporary error: container offline-docker-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:43.911107    9728 oci.go:87] couldn't shut down offline-docker-20211117230313-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	 
	I1117 23:03:43.915300    9728 cli_runner.go:115] Run: docker rm -f -v offline-docker-20211117230313-9504
	W1117 23:03:44.001042    9728 cli_runner.go:162] docker rm -f -v offline-docker-20211117230313-9504 returned with exit code 1
	W1117 23:03:44.002179    9728 delete.go:139] delete failed (probably ok) <nil>
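Throughout the demolish sequence, every `No such container` failure is downgraded to "probably ok": for cleanup, a container that never existed is as good as a deleted one. A sketch of that classification under the assumption that matching the stderr text is sufficient (minikube's actual check may be stricter):

```go
package main

import (
	"fmt"
	"strings"
)

// alreadyGone reports whether a failed `docker container inspect` or
// `docker rm` should be treated as "the container never existed", letting
// the cleanup path proceed to recreate instead of aborting.
func alreadyGone(stderr string) bool {
	return strings.Contains(stderr, "No such container")
}

func main() {
	fmt.Println(alreadyGone("Error: No such container: offline-docker-20211117230313-9504")) // true
	fmt.Println(alreadyGone("permission denied"))                                            // false
}
```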
	I1117 23:03:44.002266    9728 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:03:45.002764    9728 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:03:45.007059    9728 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:03:45.007196    9728 start.go:160] libmachine.API.Create for "offline-docker-20211117230313-9504" (driver="docker")
	I1117 23:03:45.007196    9728 client.go:168] LocalClient.Create starting
	I1117 23:03:45.007731    9728 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:03:45.007926    9728 main.go:130] libmachine: Decoding PEM data...
	I1117 23:03:45.007926    9728 main.go:130] libmachine: Parsing certificate...
	I1117 23:03:45.007926    9728 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:03:45.007926    9728 main.go:130] libmachine: Decoding PEM data...
	I1117 23:03:45.007926    9728 main.go:130] libmachine: Parsing certificate...
	I1117 23:03:45.013624    9728 cli_runner.go:115] Run: docker network inspect offline-docker-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:03:45.170567    9728 network_create.go:67] Found existing network {name:offline-docker-20211117230313-9504 subnet:0xc0005a2cf0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1117 23:03:45.170567    9728 kic.go:106] calculated static IP "192.168.58.2" for the "offline-docker-20211117230313-9504" container
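The `kic.go:106` line derives the node's static IP from the existing network: with gateway 192.168.58.1, the first node gets 192.168.58.2. A sketch of that gateway-plus-one derivation, inferred from the observed values rather than taken from minikube's implementation:

```go
package main

import (
	"fmt"
	"net"
)

// nodeIP returns the address one above the gateway, matching the behavior
// seen in the log (…58.1 -> …58.2). Incrementing only the last octet is
// enough for the common /24 networks minikube creates.
func nodeIP(gateway string) string {
	ip := net.ParseIP(gateway).To4()
	if ip == nil {
		return ""
	}
	next := make(net.IP, len(ip))
	copy(next, ip)
	next[3]++ // gateway + 1
	return next.String()
}

func main() {
	fmt.Println(nodeIP("192.168.58.1")) // 192.168.58.2
}
```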
	I1117 23:03:45.177361    9728 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:03:45.275960    9728 cli_runner.go:115] Run: docker volume create offline-docker-20211117230313-9504 --label name.minikube.sigs.k8s.io=offline-docker-20211117230313-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:03:45.365803    9728 oci.go:102] Successfully created a docker volume offline-docker-20211117230313-9504
	I1117 23:03:45.368978    9728 cli_runner.go:115] Run: docker run --rm --name offline-docker-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20211117230313-9504 --entrypoint /usr/bin/test -v offline-docker-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:03:46.444426    9728 cli_runner.go:168] Completed: docker run --rm --name offline-docker-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20211117230313-9504 --entrypoint /usr/bin/test -v offline-docker-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.07544s)
	I1117 23:03:46.444678    9728 oci.go:106] Successfully prepared a docker volume offline-docker-20211117230313-9504
	I1117 23:03:46.444830    9728 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:03:46.444909    9728 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:03:46.448842    9728 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:03:46.448842    9728 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:03:46.556898    9728 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:03:46.556898    9728 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:03:46.811851    9728 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 23:03:46.54479779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index
.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:03:46.812335    9728 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:03:46.812425    9728 client.go:171] LocalClient.Create took 1.8052153s
	I1117 23:03:48.820575    9728 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:03:48.824054    9728 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504
	W1117 23:03:48.916204    9728 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504 returned with exit code 1
	I1117 23:03:48.916280    9728 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:49.099622    9728 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504
	W1117 23:03:49.187246    9728 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504 returned with exit code 1
	I1117 23:03:49.187555    9728 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:49.521125    9728 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504
	W1117 23:03:49.610261    9728 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504 returned with exit code 1
	I1117 23:03:49.610261    9728 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:50.075865    9728 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504
	W1117 23:03:50.169192    9728 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504 returned with exit code 1
	W1117 23:03:50.169192    9728 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	
	W1117 23:03:50.169192    9728 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:50.169192    9728 start.go:129] duration metric: createHost completed in 5.1663892s
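The `df -h /var | awk 'NR==2{print $5}'` command that ssh_runner keeps scheduling above is minikube's free-space probe for the node's /var; since the container never came up, it never actually ran. A minimal local sketch of the same pipeline (probing `/` on the host here instead of the node's `/var`):

```shell
# Same df/awk shape as minikube's probe: take the second line of df output
# and print column 5, the "Use%" field.
usage=$(df -h / | awk 'NR==2{print $5}')
echo "used: $usage"   # e.g. "used: 42%"
```

The awk `NR==2` skips the header row, so this assumes the filesystem line is not wrapped (df on very long device names can split it across two lines).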
	I1117 23:03:50.177292    9728 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:03:50.180937    9728 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504
	W1117 23:03:50.283103    9728 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504 returned with exit code 1
	I1117 23:03:50.283251    9728 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:50.483480    9728 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504
	W1117 23:03:50.575367    9728 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504 returned with exit code 1
	I1117 23:03:50.575780    9728 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:50.878076    9728 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504
	W1117 23:03:50.967810    9728 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504 returned with exit code 1
	I1117 23:03:50.968045    9728 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:51.636061    9728 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504
	W1117 23:03:51.741222    9728 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504 returned with exit code 1
	W1117 23:03:51.741222    9728 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	
	W1117 23:03:51.741222    9728 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504
	I1117 23:03:51.741222    9728 fix.go:57] fixHost completed within 23.6495855s
	I1117 23:03:51.741222    9728 start.go:80] releasing machines lock for "offline-docker-20211117230313-9504", held for 23.6495855s
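The retry.go lines above wrap the failing port lookup in a grow-the-delay retry loop. The pattern can be sketched in shell; the `lookup_port` stub and the doubling delays below are illustrative only (minikube's actual schedule is jittered), and the stub stands in for the real command, `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' <name>`, which keeps exiting 1 because the container was never created:

```shell
# Hypothetical stand-in for the failing docker inspect; always exits non-zero,
# as it would while the container does not exist.
lookup_port() { false; }

attempt=1
delay_ms=180
while [ "$attempt" -le 4 ]; do
  if port=$(lookup_port); then
    echo "ssh port: $port"
    break
  fi
  echo "retry $attempt: will retry after ${delay_ms}ms"
  delay_ms=$((delay_ms * 2))       # grow the delay between attempts
  attempt=$((attempt + 1))
done
```

After the attempt budget is exhausted, the caller gives up and surfaces the wrapped "No such container" error, which is exactly what the `error running df -h /var` warnings above record.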
	W1117 23:03:51.741999    9728 out.go:241] * Failed to start docker container. Running "minikube delete -p offline-docker-20211117230313-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p offline-docker-20211117230313-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:03:51.746105    9728 out.go:176] 
	W1117 23:03:51.746332    9728 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:03:51.746332    9728 out.go:241] * 
	* 
	W1117 23:03:51.747799    9728 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:03:51.750776    9728 out.go:176] 

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:59: out/minikube-windows-amd64.exe start -p offline-docker-20211117230313-9504 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker failed: exit status 80
panic.go:642: *** TestOffline FAILED at 2021-11-17 23:03:51.8815638 +0000 GMT m=+2238.378749501
helpers_test.go:222: -----------------------post-mortem--------------------------------

                                                
                                                
=== CONT  TestOffline
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-20211117230313-9504
helpers_test.go:235: (dbg) docker inspect offline-docker-20211117230313-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "offline-docker-20211117230313-9504",
	        "Id": "264aedd5c5318828644d59f91bd3fc0054d155e12afcd93c341d7d0e7a47610a",
	        "Created": "2021-11-17T23:03:16.940988732Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
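Note that the JSON above describes the leftover minikube *network* object, not a container (the container was never created, per the "No such container" errors), which is why `docker inspect <name>` still matched something. Pulling the subnet out of such output can be sketched against an inline copy of the fragment; the sed pattern is a quick illustration, and `docker network inspect -f` Go templates are the robust route:

```shell
# Inline copy of the relevant IPAM fragment from the inspect output above.
json='{"IPAM":{"Driver":"default","Config":[{"Subnet":"192.168.58.0/24","Gateway":"192.168.58.1"}]}}'

# Capture the first quoted value following "Subnet": .
subnet=$(printf '%s' "$json" | sed -n 's/.*"Subnet":"\([^"]*\)".*/\1/p')
echo "subnet: $subnet"   # prints "subnet: 192.168.58.0/24"
```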
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-20211117230313-9504 -n offline-docker-20211117230313-9504

                                                
                                                
=== CONT  TestOffline
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-20211117230313-9504 -n offline-docker-20211117230313-9504: exit status 7 (1.8287862s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:03:53.821832    7428 status.go:247] status error: host: state: unknown state "offline-docker-20211117230313-9504": docker container inspect offline-docker-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117230313-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-20211117230313-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-20211117230313-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20211117230313-9504

                                                
                                                
=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20211117230313-9504: (3.7521491s)
--- FAIL: TestOffline (44.24s)

                                                
                                    
TestAddons/Setup (85.48s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-20211117222746-9504 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p addons-20211117222746-9504 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: exit status 80 (1m25.395758s)

                                                
                                                
-- stdout --
	* [addons-20211117222746-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node addons-20211117222746-9504 in cluster addons-20211117222746-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "addons-20211117222746-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 22:27:46.400029    6376 out.go:297] Setting OutFile to fd 724 ...
	I1117 22:27:46.463845    6376 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:27:46.463959    6376 out.go:310] Setting ErrFile to fd 592...
	I1117 22:27:46.463959    6376 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:27:46.474786    6376 out.go:304] Setting JSON to false
	I1117 22:27:46.476695    6376 start.go:112] hostinfo: {"hostname":"minikube2","uptime":77382,"bootTime":1637110684,"procs":127,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 22:27:46.476695    6376 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 22:27:46.481746    6376 out.go:176] * [addons-20211117222746-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 22:27:46.481746    6376 notify.go:174] Checking for updates...
	I1117 22:27:46.484797    6376 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 22:27:46.487584    6376 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 22:27:46.490000    6376 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 22:27:46.490402    6376 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 22:27:48.032995    6376 docker.go:132] docker version: linux-19.03.12
	I1117 22:27:48.038044    6376 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:27:48.479215    6376 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:39 OomKillDisable:true NGoroutines:50 SystemTime:2021-11-17 22:27:48.127235617 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:27:48.849455    6376 out.go:176] * Using the docker driver based on user configuration
	I1117 22:27:48.850088    6376 start.go:280] selected driver: docker
	I1117 22:27:48.850088    6376 start.go:775] validating driver "docker" against <nil>
	I1117 22:27:48.850088    6376 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 22:27:48.907400    6376 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:27:49.239823    6376 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:39 OomKillDisable:true NGoroutines:50 SystemTime:2021-11-17 22:27:48.986541998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:27:49.239823    6376 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 22:27:49.240516    6376 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 22:27:49.240587    6376 cni.go:93] Creating CNI manager for ""
	I1117 22:27:49.240587    6376 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 22:27:49.240587    6376 start_flags.go:282] config:
	{Name:addons-20211117222746-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:addons-20211117222746-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Netw
orkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:27:49.244988    6376 out.go:176] * Starting control plane node addons-20211117222746-9504 in cluster addons-20211117222746-9504
	I1117 22:27:49.245194    6376 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 22:27:49.250809    6376 out.go:176] * Pulling base image ...
	I1117 22:27:49.250809    6376 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:27:49.250809    6376 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 22:27:49.250809    6376 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 22:27:49.250809    6376 cache.go:57] Caching tarball of preloaded images
	I1117 22:27:49.250809    6376 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 22:27:49.250809    6376 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 22:27:49.251825    6376 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-20211117222746-9504\config.json ...
	I1117 22:27:49.251825    6376 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-20211117222746-9504\config.json: {Name:mk1ae387defb7fe0c2cd2f35200e837463b05545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 22:27:49.354111    6376 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
	I1117 22:27:49.354111    6376 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase_v0.0.28@sha256_4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar
	I1117 22:27:49.354111    6376 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase_v0.0.28@sha256_4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar
	I1117 22:27:49.354111    6376 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory
	I1117 22:27:49.354111    6376 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory, skipping pull
	I1117 22:27:49.354111    6376 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in cache, skipping pull
	I1117 22:27:49.354111    6376 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c as a tarball
	I1117 22:27:49.354736    6376 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c from local cache
	I1117 22:27:49.354736    6376 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase_v0.0.28@sha256_4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar
	I1117 22:28:33.468734    6376 cache.go:163] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c from cached tarball
	I1117 22:28:33.468829    6376 cache.go:206] Successfully downloaded all kic artifacts
	I1117 22:28:33.468829    6376 start.go:313] acquiring machines lock for addons-20211117222746-9504: {Name:mk4d4599002859d9e65c7f9f4bfe1d29d8886418 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 22:28:33.468829    6376 start.go:317] acquired machines lock for "addons-20211117222746-9504" in 0s
	I1117 22:28:33.469364    6376 start.go:89] Provisioning new machine with config: &{Name:addons-20211117222746-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:addons-20211117222746-9504 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 22:28:33.469643    6376 start.go:126] createHost starting for "" (driver="docker")
	I1117 22:28:33.476146    6376 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 22:28:33.476146    6376 start.go:160] libmachine.API.Create for "addons-20211117222746-9504" (driver="docker")
	I1117 22:28:33.476687    6376 client.go:168] LocalClient.Create starting
	I1117 22:28:33.477448    6376 main.go:130] libmachine: Creating CA: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 22:28:33.887439    6376 main.go:130] libmachine: Creating client certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 22:28:34.438395    6376 cli_runner.go:115] Run: docker network inspect addons-20211117222746-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 22:28:34.528266    6376 cli_runner.go:162] docker network inspect addons-20211117222746-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 22:28:34.532504    6376 network_create.go:254] running [docker network inspect addons-20211117222746-9504] to gather additional debugging logs...
	I1117 22:28:34.532504    6376 cli_runner.go:115] Run: docker network inspect addons-20211117222746-9504
	W1117 22:28:34.618620    6376 cli_runner.go:162] docker network inspect addons-20211117222746-9504 returned with exit code 1
	I1117 22:28:34.618784    6376 network_create.go:257] error running [docker network inspect addons-20211117222746-9504]: docker network inspect addons-20211117222746-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20211117222746-9504
	I1117 22:28:34.618784    6376 network_create.go:259] output of [docker network inspect addons-20211117222746-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20211117222746-9504
	
	** /stderr **
	I1117 22:28:34.623751    6376 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 22:28:34.730809    6376 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0002d41c0] misses:0}
	I1117 22:28:34.730809    6376 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 22:28:34.731415    6376 network_create.go:106] attempt to create docker network addons-20211117222746-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 22:28:34.735823    6376 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20211117222746-9504
	I1117 22:28:34.944106    6376 network_create.go:90] docker network addons-20211117222746-9504 192.168.49.0/24 created
	I1117 22:28:34.944210    6376 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20211117222746-9504" container
	I1117 22:28:34.952183    6376 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 22:28:35.042650    6376 cli_runner.go:115] Run: docker volume create addons-20211117222746-9504 --label name.minikube.sigs.k8s.io=addons-20211117222746-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 22:28:35.135563    6376 oci.go:102] Successfully created a docker volume addons-20211117222746-9504
	I1117 22:28:35.140450    6376 cli_runner.go:115] Run: docker run --rm --name addons-20211117222746-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20211117222746-9504 --entrypoint /usr/bin/test -v addons-20211117222746-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 22:28:39.012493    6376 cli_runner.go:168] Completed: docker run --rm --name addons-20211117222746-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20211117222746-9504 --entrypoint /usr/bin/test -v addons-20211117222746-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (3.8720136s)
	I1117 22:28:39.012824    6376 oci.go:106] Successfully prepared a docker volume addons-20211117222746-9504
	I1117 22:28:39.012942    6376 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:28:39.012942    6376 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 22:28:39.018397    6376 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:28:39.019118    6376 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20211117222746-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 22:28:39.251107    6376 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20211117222746-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 22:28:39.251208    6376 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20211117222746-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 22:28:39.365558    6376 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 22:28:39.108927688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 22:28:39.365945    6376 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 22:28:39.366027    6376 client.go:171] LocalClient.Create took 5.8892957s
	I1117 22:28:41.373730    6376 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:28:41.377402    6376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504
	W1117 22:28:41.470891    6376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504 returned with exit code 1
	I1117 22:28:41.470891    6376 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:41.751921    6376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504
	W1117 22:28:41.839837    6376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504 returned with exit code 1
	I1117 22:28:41.840049    6376 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:42.385216    6376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504
	W1117 22:28:42.472063    6376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504 returned with exit code 1
	I1117 22:28:42.472063    6376 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:43.133274    6376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504
	W1117 22:28:43.220286    6376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504 returned with exit code 1
	W1117 22:28:43.220505    6376 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	
	W1117 22:28:43.220505    6376 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:43.220505    6376 start.go:129] duration metric: createHost completed in 9.7507012s
	I1117 22:28:43.220505    6376 start.go:80] releasing machines lock for "addons-20211117222746-9504", held for 9.7516027s
	W1117 22:28:43.220505    6376 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:28:43.228867    6376 cli_runner.go:115] Run: docker container inspect addons-20211117222746-9504 --format={{.State.Status}}
	W1117 22:28:43.315904    6376 cli_runner.go:162] docker container inspect addons-20211117222746-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:28:43.316085    6376 delete.go:82] Unable to get host status for addons-20211117222746-9504, assuming it has already been deleted: state: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	W1117 22:28:43.316357    6376 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:28:43.316449    6376 start.go:547] Will try again in 5 seconds ...
	I1117 22:28:48.317740    6376 start.go:313] acquiring machines lock for addons-20211117222746-9504: {Name:mk4d4599002859d9e65c7f9f4bfe1d29d8886418 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 22:28:48.318129    6376 start.go:317] acquired machines lock for "addons-20211117222746-9504" in 0s
	I1117 22:28:48.318129    6376 start.go:93] Skipping create...Using existing machine configuration
	I1117 22:28:48.318663    6376 fix.go:55] fixHost starting: 
	I1117 22:28:48.326202    6376 cli_runner.go:115] Run: docker container inspect addons-20211117222746-9504 --format={{.State.Status}}
	W1117 22:28:48.414048    6376 cli_runner.go:162] docker container inspect addons-20211117222746-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:28:48.414239    6376 fix.go:108] recreateIfNeeded on addons-20211117222746-9504: state= err=unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:48.414239    6376 fix.go:113] machineExists: false. err=machine does not exist
	I1117 22:28:48.419887    6376 out.go:176] * docker "addons-20211117222746-9504" container is missing, will recreate.
	I1117 22:28:48.419965    6376 delete.go:124] DEMOLISHING addons-20211117222746-9504 ...
	I1117 22:28:48.427448    6376 cli_runner.go:115] Run: docker container inspect addons-20211117222746-9504 --format={{.State.Status}}
	W1117 22:28:48.511345    6376 cli_runner.go:162] docker container inspect addons-20211117222746-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:28:48.511345    6376 stop.go:75] unable to get state: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:48.511629    6376 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:48.519230    6376 cli_runner.go:115] Run: docker container inspect addons-20211117222746-9504 --format={{.State.Status}}
	W1117 22:28:48.606295    6376 cli_runner.go:162] docker container inspect addons-20211117222746-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:28:48.606633    6376 delete.go:82] Unable to get host status for addons-20211117222746-9504, assuming it has already been deleted: state: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:48.610462    6376 cli_runner.go:115] Run: docker container inspect -f {{.Id}} addons-20211117222746-9504
	W1117 22:28:48.693811    6376 cli_runner.go:162] docker container inspect -f {{.Id}} addons-20211117222746-9504 returned with exit code 1
	I1117 22:28:48.693973    6376 kic.go:360] could not find the container addons-20211117222746-9504 to remove it. will try anyways
	I1117 22:28:48.698270    6376 cli_runner.go:115] Run: docker container inspect addons-20211117222746-9504 --format={{.State.Status}}
	W1117 22:28:48.782874    6376 cli_runner.go:162] docker container inspect addons-20211117222746-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:28:48.783188    6376 oci.go:83] error getting container status, will try to delete anyways: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:48.787499    6376 cli_runner.go:115] Run: docker exec --privileged -t addons-20211117222746-9504 /bin/bash -c "sudo init 0"
	W1117 22:28:48.876375    6376 cli_runner.go:162] docker exec --privileged -t addons-20211117222746-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 22:28:48.876375    6376 oci.go:658] error shutdown addons-20211117222746-9504: docker exec --privileged -t addons-20211117222746-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:49.881483    6376 cli_runner.go:115] Run: docker container inspect addons-20211117222746-9504 --format={{.State.Status}}
	W1117 22:28:49.967858    6376 cli_runner.go:162] docker container inspect addons-20211117222746-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:28:49.967858    6376 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:49.967858    6376 oci.go:672] temporary error: container addons-20211117222746-9504 status is  but expect it to be exited
	I1117 22:28:49.967858    6376 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:50.435738    6376 cli_runner.go:115] Run: docker container inspect addons-20211117222746-9504 --format={{.State.Status}}
	W1117 22:28:50.524108    6376 cli_runner.go:162] docker container inspect addons-20211117222746-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:28:50.524419    6376 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:50.524419    6376 oci.go:672] temporary error: container addons-20211117222746-9504 status is  but expect it to be exited
	I1117 22:28:50.524419    6376 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:51.419565    6376 cli_runner.go:115] Run: docker container inspect addons-20211117222746-9504 --format={{.State.Status}}
	W1117 22:28:51.506265    6376 cli_runner.go:162] docker container inspect addons-20211117222746-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:28:51.506360    6376 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:51.506360    6376 oci.go:672] temporary error: container addons-20211117222746-9504 status is  but expect it to be exited
	I1117 22:28:51.506360    6376 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:52.148975    6376 cli_runner.go:115] Run: docker container inspect addons-20211117222746-9504 --format={{.State.Status}}
	W1117 22:28:52.236222    6376 cli_runner.go:162] docker container inspect addons-20211117222746-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:28:52.236317    6376 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:52.236383    6376 oci.go:672] temporary error: container addons-20211117222746-9504 status is  but expect it to be exited
	I1117 22:28:52.236383    6376 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:53.344522    6376 cli_runner.go:115] Run: docker container inspect addons-20211117222746-9504 --format={{.State.Status}}
	W1117 22:28:53.447394    6376 cli_runner.go:162] docker container inspect addons-20211117222746-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:28:53.447504    6376 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:53.447504    6376 oci.go:672] temporary error: container addons-20211117222746-9504 status is  but expect it to be exited
	I1117 22:28:53.447504    6376 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:54.963539    6376 cli_runner.go:115] Run: docker container inspect addons-20211117222746-9504 --format={{.State.Status}}
	W1117 22:28:55.053256    6376 cli_runner.go:162] docker container inspect addons-20211117222746-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:28:55.053357    6376 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:55.053399    6376 oci.go:672] temporary error: container addons-20211117222746-9504 status is  but expect it to be exited
	I1117 22:28:55.053399    6376 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:58.099903    6376 cli_runner.go:115] Run: docker container inspect addons-20211117222746-9504 --format={{.State.Status}}
	W1117 22:28:58.188383    6376 cli_runner.go:162] docker container inspect addons-20211117222746-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:28:58.188591    6376 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:28:58.188591    6376 oci.go:672] temporary error: container addons-20211117222746-9504 status is  but expect it to be exited
	I1117 22:28:58.188658    6376 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:29:03.974909    6376 cli_runner.go:115] Run: docker container inspect addons-20211117222746-9504 --format={{.State.Status}}
	W1117 22:29:04.069807    6376 cli_runner.go:162] docker container inspect addons-20211117222746-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:29:04.069912    6376 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:29:04.069972    6376 oci.go:672] temporary error: container addons-20211117222746-9504 status is  but expect it to be exited
	I1117 22:29:04.069972    6376 oci.go:87] couldn't shut down addons-20211117222746-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "addons-20211117222746-9504": docker container inspect addons-20211117222746-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	 
	I1117 22:29:04.074366    6376 cli_runner.go:115] Run: docker rm -f -v addons-20211117222746-9504
	W1117 22:29:04.165218    6376 cli_runner.go:162] docker rm -f -v addons-20211117222746-9504 returned with exit code 1
	W1117 22:29:04.166654    6376 delete.go:139] delete failed (probably ok) <nil>
	I1117 22:29:04.166654    6376 fix.go:120] Sleeping 1 second for extra luck!
	I1117 22:29:05.167260    6376 start.go:126] createHost starting for "" (driver="docker")
	I1117 22:29:05.173539    6376 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 22:29:05.174288    6376 start.go:160] libmachine.API.Create for "addons-20211117222746-9504" (driver="docker")
	I1117 22:29:05.174288    6376 client.go:168] LocalClient.Create starting
	I1117 22:29:05.174288    6376 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 22:29:05.175043    6376 main.go:130] libmachine: Decoding PEM data...
	I1117 22:29:05.175075    6376 main.go:130] libmachine: Parsing certificate...
	I1117 22:29:05.175075    6376 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 22:29:05.175075    6376 main.go:130] libmachine: Decoding PEM data...
	I1117 22:29:05.175075    6376 main.go:130] libmachine: Parsing certificate...
	I1117 22:29:05.181460    6376 cli_runner.go:115] Run: docker network inspect addons-20211117222746-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 22:29:05.272342    6376 network_create.go:67] Found existing network {name:addons-20211117222746-9504 subnet:0xc000e87b30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 22:29:05.272573    6376 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20211117222746-9504" container
	I1117 22:29:05.281100    6376 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 22:29:05.369924    6376 cli_runner.go:115] Run: docker volume create addons-20211117222746-9504 --label name.minikube.sigs.k8s.io=addons-20211117222746-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 22:29:05.460669    6376 oci.go:102] Successfully created a docker volume addons-20211117222746-9504
	I1117 22:29:05.466245    6376 cli_runner.go:115] Run: docker run --rm --name addons-20211117222746-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20211117222746-9504 --entrypoint /usr/bin/test -v addons-20211117222746-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 22:29:06.313464    6376 oci.go:106] Successfully prepared a docker volume addons-20211117222746-9504
	I1117 22:29:06.313464    6376 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:29:06.313464    6376 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 22:29:06.317943    6376 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:29:06.318758    6376 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20211117222746-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 22:29:06.440877    6376 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20211117222746-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 22:29:06.441100    6376 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20211117222746-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 22:29:06.681483    6376 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:29:06.403243015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 22:29:06.682061    6376 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 22:29:06.682211    6376 client.go:171] LocalClient.Create took 1.5079115s
	I1117 22:29:08.691340    6376 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:29:08.695765    6376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504
	W1117 22:29:08.784031    6376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504 returned with exit code 1
	I1117 22:29:08.784378    6376 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:29:08.972146    6376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504
	W1117 22:29:09.079351    6376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504 returned with exit code 1
	I1117 22:29:09.079609    6376 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:29:09.414730    6376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504
	W1117 22:29:09.500789    6376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504 returned with exit code 1
	I1117 22:29:09.501203    6376 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:29:09.968411    6376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504
	W1117 22:29:10.052757    6376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504 returned with exit code 1
	W1117 22:29:10.053158    6376 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	
	W1117 22:29:10.053191    6376 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:29:10.053191    6376 start.go:129] duration metric: createHost completed in 4.885894s
	I1117 22:29:10.060097    6376 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:29:10.063624    6376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504
	W1117 22:29:10.151652    6376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504 returned with exit code 1
	I1117 22:29:10.152164    6376 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:29:10.353437    6376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504
	W1117 22:29:10.442231    6376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504 returned with exit code 1
	I1117 22:29:10.442378    6376 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:29:10.745838    6376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504
	W1117 22:29:10.839251    6376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504 returned with exit code 1
	I1117 22:29:10.839561    6376 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:29:11.508381    6376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504
	W1117 22:29:11.593590    6376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504 returned with exit code 1
	W1117 22:29:11.593590    6376 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	
	W1117 22:29:11.593590    6376 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117222746-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117222746-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117222746-9504
	I1117 22:29:11.593590    6376 fix.go:57] fixHost completed within 23.2747522s
	I1117 22:29:11.593590    6376 start.go:80] releasing machines lock for "addons-20211117222746-9504", held for 23.2752865s
	W1117 22:29:11.593590    6376 out.go:241] * Failed to start docker container. Running "minikube delete -p addons-20211117222746-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p addons-20211117222746-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:29:11.598894    6376 out.go:176] 
	W1117 22:29:11.599091    6376 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 22:29:11.599091    6376 out.go:241] * 
	* 
	W1117 22:29:11.600335    6376 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 22:29:11.602424    6376 out.go:176] 

** /stderr **
addons_test.go:78: out/minikube-windows-amd64.exe start -p addons-20211117222746-9504 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: exit status 80
--- FAIL: TestAddons/Setup (85.48s)

TestCertOptions (48.43s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20211117230807-9504 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-options-20211117230807-9504 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: exit status 80 (39.478698s)

-- stdout --
	* [cert-options-20211117230807-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node cert-options-20211117230807-9504 in cluster cert-options-20211117230807-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-options-20211117230807-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 23:08:14.166673    4504 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 23:08:41.612862    4504 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p cert-options-20211117230807-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:52: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p cert-options-20211117230807-9504 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost" : exit status 80
cert_options_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20211117230807-9504 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:61: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p cert-options-20211117230807-9504 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 80 (1.86465s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20211117230807-9504": docker container inspect cert-options-20211117230807-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20211117230807-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_7b8531d53ef9e7bbc6fc851111559258d7d600b6_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:63: failed to read apiserver cert inside minikube. args "out/minikube-windows-amd64.exe -p cert-options-20211117230807-9504 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 80
cert_options_test.go:70: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:70: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:70: apiserver cert does not include localhost in SAN.
cert_options_test.go:70: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:83: failed to inspect container for the port get port 8555 for "cert-options-20211117230807-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-20211117230807-9504: exit status 1
stdout:

stderr:
Error: No such container: cert-options-20211117230807-9504
cert_options_test.go:86: expected to get a non-zero forwarded port but got 0
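The failed lookup at cert_options_test.go:83 indexes the container's `.NetworkSettings.Ports` map with the Go template `{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}`; because the container was never created, there was nothing to index and the test fell back to port 0. As a hedged sketch, the equivalent lookup over a hypothetical inspect fragment (the binding values below are invented, not from this run) looks like:

```python
import json

# Hypothetical container-inspect fragment showing the shape the Go template
# {{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}} reads.
# The values are made up; in this run the container did not exist at all.
container = json.loads("""
{
    "NetworkSettings": {
        "Ports": {
            "8555/tcp": [{"HostIp": "127.0.0.1", "HostPort": "55123"}]
        }
    }
}
""")

bindings = container["NetworkSettings"]["Ports"].get("8555/tcp") or []
# A missing binding is treated as port 0, which the assertion at line 86 rejects.
host_port = int(bindings[0]["HostPort"]) if bindings else 0
print(host_port)  # → 55123
```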
cert_options_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-20211117230807-9504 -- "sudo cat /etc/kubernetes/admin.conf"

=== CONT  TestCertOptions
cert_options_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p cert-options-20211117230807-9504 -- "sudo cat /etc/kubernetes/admin.conf": exit status 80 (1.8576603s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20211117230807-9504": docker container inspect cert-options-20211117230807-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20211117230807-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_bf4b0acc5ddf49539e7b1dcbc83bd1916f9eb405_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:103: failed to SSH to minikube with args: "out/minikube-windows-amd64.exe ssh -p cert-options-20211117230807-9504 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 80
cert_options_test.go:107: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20211117230807-9504": docker container inspect cert-options-20211117230807-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20211117230807-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_bf4b0acc5ddf49539e7b1dcbc83bd1916f9eb405_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:110: *** TestCertOptions FAILED at 2021-11-17 23:08:50.517946 +0000 GMT m=+2537.012895101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-options-20211117230807-9504
helpers_test.go:235: (dbg) docker inspect cert-options-20211117230807-9504:

-- stdout --
	[
	    {
	        "Name": "cert-options-20211117230807-9504",
	        "Id": "5afff171e81d5ce7766a28aa8e71c71093b793d07412b21d4eddde4f21cb022f",
	        "Created": "2021-11-17T23:08:10.461968695Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20211117230807-9504 -n cert-options-20211117230807-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20211117230807-9504 -n cert-options-20211117230807-9504: exit status 7 (1.9061672s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:08:52.514326    6948 status.go:247] status error: host: state: unknown state "cert-options-20211117230807-9504": docker container inspect cert-options-20211117230807-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20211117230807-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-20211117230807-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-options-20211117230807-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20211117230807-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20211117230807-9504: (3.1025235s)
--- FAIL: TestCertOptions (48.43s)
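One detail worth noting in the post-mortem above: `docker inspect` resolved the profile name to the minikube Docker *network*, not a container, and that network's `Containers` map is empty. That is consistent with every preceding "No such container" error: the network was created but the node container never was. A minimal sketch of that check, run against a trimmed copy of the JSON above:

```python
import json

# Trimmed copy of the "docker inspect" output from the post-mortem above.
# It describes the minikube network (Driver: bridge), and no container ever
# attached to it -- hence the repeated "No such container" failures.
inspect_json = """
[
    {
        "Name": "cert-options-20211117230807-9504",
        "Driver": "bridge",
        "Containers": {},
        "Labels": {"created_by.minikube.sigs.k8s.io": "true"}
    }
]
"""

network = json.loads(inspect_json)[0]
if not network["Containers"]:
    print(f"network {network['Name']} exists but has no attached containers")
```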

TestCertExpiration (291.54s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20211117230315-9504 --memory=2048 --cert-expiration=3m --driver=docker

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-20211117230315-9504 --memory=2048 --cert-expiration=3m --driver=docker: exit status 80 (37.9952573s)

-- stdout --
	* [cert-expiration-20211117230315-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node cert-expiration-20211117230315-9504 in cluster cert-expiration-20211117230315-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20211117230315-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 23:03:21.179022    9140 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 23:03:48.668686    9140 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20211117230315-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:126: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p cert-expiration-20211117230315-9504 --memory=2048 --cert-expiration=3m --driver=docker" : exit status 80

=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20211117230315-9504 --memory=2048 --cert-expiration=8760h --driver=docker

=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-20211117230315-9504 --memory=2048 --cert-expiration=8760h --driver=docker: exit status 80 (1m8.6382519s)

-- stdout --
	* [cert-expiration-20211117230315-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20211117230315-9504 in cluster cert-expiration-20211117230315-9504
	* Pulling base image ...
	* docker "cert-expiration-20211117230315-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20211117230315-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 23:07:25.867479   11036 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 23:07:56.860269   11036 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20211117230315-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:134: failed to start minikube after cert expiration: "out/minikube-windows-amd64.exe start -p cert-expiration-20211117230315-9504 --memory=2048 --cert-expiration=8760h --driver=docker" : exit status 80
cert_options_test.go:137: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-20211117230315-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20211117230315-9504 in cluster cert-expiration-20211117230315-9504
	* Pulling base image ...
	* docker "cert-expiration-20211117230315-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20211117230315-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 23:07:25.867479   11036 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 23:07:56.860269   11036 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20211117230315-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:139: *** TestCertExpiration FAILED at 2021-11-17 23:08:02.2993973 +0000 GMT m=+2488.794708101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertExpiration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-expiration-20211117230315-9504
helpers_test.go:235: (dbg) docker inspect cert-expiration-20211117230315-9504:

-- stdout --
	[
	    {
	        "Name": "cert-expiration-20211117230315-9504",
	        "Id": "14f0535c7fc34319ee1d5a39e9f9455414a72227df9230245058ac4619fa1710",
	        "Created": "2021-11-17T23:07:22.899939688Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-20211117230315-9504 -n cert-expiration-20211117230315-9504

=== CONT  TestCertExpiration
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-20211117230315-9504 -n cert-expiration-20211117230315-9504: exit status 7 (1.8454635s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:08:04.233295    9248 status.go:247] status error: host: state: unknown state "cert-expiration-20211117230315-9504": docker container inspect cert-expiration-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-expiration-20211117230315-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-20211117230315-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-expiration-20211117230315-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-20211117230315-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-20211117230315-9504: (2.9478886s)
--- FAIL: TestCertExpiration (291.54s)
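The `status --format={{.Host}}` post-mortems in both cert tests turn a failed `docker container inspect --format {{.State.Status}}` into the host state `Nonexistent` and exit status 7, which the helper then treats as "may be ok" and skips log retrieval. A hypothetical sketch of that mapping (the helper name and branching are illustrative, not minikube's actual code):

```python
def host_state(exit_code: int, stderr: str, stdout: str = "") -> str:
    """Map the result of `docker container inspect --format
    {{.State.Status}}` onto the host state string that `minikube status`
    prints (hypothetical reconstruction of the behavior seen above)."""
    if exit_code != 0:
        if "No such container" in stderr:
            # Container was never created or was deleted -> "Nonexistent",
            # reported by minikube with exit status 7.
            return "Nonexistent"
        return "Error"  # docker itself failed for some other reason
    return stdout.strip().capitalize()  # e.g. "running" -> "Running"

print(host_state(1, "Error: No such container: cert-expiration-20211117230315-9504"))
# → Nonexistent
```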

TestDockerFlags (45.81s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20211117230357-9504 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p docker-flags-20211117230357-9504 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: exit status 80 (37.5289032s)

-- stdout --
	* [docker-flags-20211117230357-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node docker-flags-20211117230357-9504 in cluster docker-flags-20211117230357-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-20211117230357-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:03:57.769593    9212 out.go:297] Setting OutFile to fd 1424 ...
	I1117 23:03:57.831586    9212 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:03:57.831586    9212 out.go:310] Setting ErrFile to fd 1560...
	I1117 23:03:57.831586    9212 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:03:57.841584    9212 out.go:304] Setting JSON to false
	I1117 23:03:57.845587    9212 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79553,"bootTime":1637110684,"procs":130,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:03:57.845587    9212 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:03:57.850594    9212 out.go:176] * [docker-flags-20211117230357-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:03:57.850594    9212 notify.go:174] Checking for updates...
	I1117 23:03:57.853581    9212 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:03:57.855602    9212 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:03:57.857582    9212 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:03:57.858586    9212 config.go:176] Loaded profile config "NoKubernetes-20211117230313-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1117 23:03:57.859578    9212 config.go:176] Loaded profile config "cert-expiration-20211117230315-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:03:57.860586    9212 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:03:57.860586    9212 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:03:59.508318    9212 docker.go:132] docker version: linux-19.03.12
	I1117 23:03:59.512599    9212 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:03:59.867877    9212 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:03:59.599485861 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:03:59.875386    9212 out.go:176] * Using the docker driver based on user configuration
	I1117 23:03:59.875386    9212 start.go:280] selected driver: docker
	I1117 23:03:59.875386    9212 start.go:775] validating driver "docker" against <nil>
	I1117 23:03:59.875386    9212 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:03:59.938417    9212 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:04:00.300617    9212 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:04:00.023624349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:04:00.300617    9212 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:04:00.301322    9212 start_flags.go:753] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1117 23:04:00.301322    9212 cni.go:93] Creating CNI manager for ""
	I1117 23:04:00.301322    9212 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:04:00.301322    9212 start_flags.go:282] config:
	{Name:docker-flags-20211117230357-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:docker-flags-20211117230357-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:04:00.306762    9212 out.go:176] * Starting control plane node docker-flags-20211117230357-9504 in cluster docker-flags-20211117230357-9504
	I1117 23:04:00.306926    9212 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:04:00.309726    9212 out.go:176] * Pulling base image ...
	I1117 23:04:00.309726    9212 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:04:00.309726    9212 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:04:00.309726    9212 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:04:00.309726    9212 cache.go:57] Caching tarball of preloaded images
	I1117 23:04:00.310457    9212 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:04:00.310457    9212 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:04:00.311012    9212 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\docker-flags-20211117230357-9504\config.json ...
	I1117 23:04:00.311198    9212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\docker-flags-20211117230357-9504\config.json: {Name:mke6ce1b62a6072db8824bb9abefcd91d705c22c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:04:00.410899    9212 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:04:00.410899    9212 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:04:00.410899    9212 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:04:00.410899    9212 start.go:313] acquiring machines lock for docker-flags-20211117230357-9504: {Name:mkb31e4bf40382592880aae6735cc19b49697c94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:04:00.411400    9212 start.go:317] acquired machines lock for "docker-flags-20211117230357-9504" in 501.1µs
	I1117 23:04:00.411400    9212 start.go:89] Provisioning new machine with config: &{Name:docker-flags-20211117230357-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:docker-flags-20211117230357-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:04:00.411617    9212 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:04:00.415383    9212 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:04:00.416308    9212 start.go:160] libmachine.API.Create for "docker-flags-20211117230357-9504" (driver="docker")
	I1117 23:04:00.416378    9212 client.go:168] LocalClient.Create starting
	I1117 23:04:00.416898    9212 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:04:00.417150    9212 main.go:130] libmachine: Decoding PEM data...
	I1117 23:04:00.417228    9212 main.go:130] libmachine: Parsing certificate...
	I1117 23:04:00.417489    9212 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:04:00.418241    9212 main.go:130] libmachine: Decoding PEM data...
	I1117 23:04:00.418241    9212 main.go:130] libmachine: Parsing certificate...
	I1117 23:04:00.423394    9212 cli_runner.go:115] Run: docker network inspect docker-flags-20211117230357-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:04:00.526014    9212 cli_runner.go:162] docker network inspect docker-flags-20211117230357-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:04:00.529553    9212 network_create.go:254] running [docker network inspect docker-flags-20211117230357-9504] to gather additional debugging logs...
	I1117 23:04:00.530084    9212 cli_runner.go:115] Run: docker network inspect docker-flags-20211117230357-9504
	W1117 23:04:00.620761    9212 cli_runner.go:162] docker network inspect docker-flags-20211117230357-9504 returned with exit code 1
	I1117 23:04:00.620761    9212 network_create.go:257] error running [docker network inspect docker-flags-20211117230357-9504]: docker network inspect docker-flags-20211117230357-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20211117230357-9504
	I1117 23:04:00.620921    9212 network_create.go:259] output of [docker network inspect docker-flags-20211117230357-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20211117230357-9504
	
	** /stderr **
	I1117 23:04:00.625888    9212 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:04:00.742955    9212 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00060e4b0] misses:0}
	I1117 23:04:00.743054    9212 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:04:00.743186    9212 network_create.go:106] attempt to create docker network docker-flags-20211117230357-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:04:00.750740    9212 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20211117230357-9504
	I1117 23:04:00.976149    9212 network_create.go:90] docker network docker-flags-20211117230357-9504 192.168.49.0/24 created
	I1117 23:04:00.976367    9212 kic.go:106] calculated static IP "192.168.49.2" for the "docker-flags-20211117230357-9504" container
	I1117 23:04:00.984159    9212 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:04:01.076495    9212 cli_runner.go:115] Run: docker volume create docker-flags-20211117230357-9504 --label name.minikube.sigs.k8s.io=docker-flags-20211117230357-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:04:01.183120    9212 oci.go:102] Successfully created a docker volume docker-flags-20211117230357-9504
	I1117 23:04:01.187296    9212 cli_runner.go:115] Run: docker run --rm --name docker-flags-20211117230357-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20211117230357-9504 --entrypoint /usr/bin/test -v docker-flags-20211117230357-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:04:02.299072    9212 cli_runner.go:168] Completed: docker run --rm --name docker-flags-20211117230357-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20211117230357-9504 --entrypoint /usr/bin/test -v docker-flags-20211117230357-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.111768s)
	I1117 23:04:02.299072    9212 oci.go:106] Successfully prepared a docker volume docker-flags-20211117230357-9504
	I1117 23:04:02.299072    9212 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:04:02.299072    9212 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:04:02.303781    9212 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:04:02.304221    9212 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20211117230357-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:04:02.411817    9212 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20211117230357-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:04:02.411899    9212 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20211117230357-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:04:02.694526    9212 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:42 OomKillDisable:true NGoroutines:56 SystemTime:2021-11-17 23:04:02.40618116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:04:02.694526    9212 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:04:02.694526    9212 client.go:171] LocalClient.Create took 2.2781312s
	I1117 23:04:04.702497    9212 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:04:04.706090    9212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504
	W1117 23:04:04.793347    9212 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504 returned with exit code 1
	I1117 23:04:04.793671    9212 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:05.075249    9212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504
	W1117 23:04:05.161355    9212 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504 returned with exit code 1
	I1117 23:04:05.161523    9212 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:05.706909    9212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504
	W1117 23:04:05.798040    9212 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504 returned with exit code 1
	I1117 23:04:05.798324    9212 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:06.458610    9212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504
	W1117 23:04:06.547090    9212 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504 returned with exit code 1
	W1117 23:04:06.547262    9212 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	
	W1117 23:04:06.547345    9212 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:06.547345    9212 start.go:129] duration metric: createHost completed in 6.1356821s
	I1117 23:04:06.547345    9212 start.go:80] releasing machines lock for "docker-flags-20211117230357-9504", held for 6.1358985s
	W1117 23:04:06.547551    9212 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:04:06.556970    9212 cli_runner.go:115] Run: docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:06.642449    9212 cli_runner.go:162] docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:06.642614    9212 delete.go:82] Unable to get host status for docker-flags-20211117230357-9504, assuming it has already been deleted: state: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	W1117 23:04:06.642939    9212 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:04:06.642939    9212 start.go:547] Will try again in 5 seconds ...
	I1117 23:04:11.643080    9212 start.go:313] acquiring machines lock for docker-flags-20211117230357-9504: {Name:mkb31e4bf40382592880aae6735cc19b49697c94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:04:11.643080    9212 start.go:317] acquired machines lock for "docker-flags-20211117230357-9504" in 0s
	I1117 23:04:11.643080    9212 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:04:11.643080    9212 fix.go:55] fixHost starting: 
	I1117 23:04:11.651897    9212 cli_runner.go:115] Run: docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:11.753967    9212 cli_runner.go:162] docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:11.753967    9212 fix.go:108] recreateIfNeeded on docker-flags-20211117230357-9504: state= err=unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:11.753967    9212 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:04:11.757592    9212 out.go:176] * docker "docker-flags-20211117230357-9504" container is missing, will recreate.
	I1117 23:04:11.757731    9212 delete.go:124] DEMOLISHING docker-flags-20211117230357-9504 ...
	I1117 23:04:11.765667    9212 cli_runner.go:115] Run: docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:11.856708    9212 cli_runner.go:162] docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:04:11.856897    9212 stop.go:75] unable to get state: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:11.856897    9212 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:11.866227    9212 cli_runner.go:115] Run: docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:11.966765    9212 cli_runner.go:162] docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:11.966873    9212 delete.go:82] Unable to get host status for docker-flags-20211117230357-9504, assuming it has already been deleted: state: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:11.971611    9212 cli_runner.go:115] Run: docker container inspect -f {{.Id}} docker-flags-20211117230357-9504
	W1117 23:04:12.057131    9212 cli_runner.go:162] docker container inspect -f {{.Id}} docker-flags-20211117230357-9504 returned with exit code 1
	I1117 23:04:12.057450    9212 kic.go:360] could not find the container docker-flags-20211117230357-9504 to remove it. will try anyways
	I1117 23:04:12.061258    9212 cli_runner.go:115] Run: docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:12.149525    9212 cli_runner.go:162] docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:04:12.149525    9212 oci.go:83] error getting container status, will try to delete anyways: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:12.153526    9212 cli_runner.go:115] Run: docker exec --privileged -t docker-flags-20211117230357-9504 /bin/bash -c "sudo init 0"
	W1117 23:04:12.240361    9212 cli_runner.go:162] docker exec --privileged -t docker-flags-20211117230357-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:04:12.240420    9212 oci.go:658] error shutdown docker-flags-20211117230357-9504: docker exec --privileged -t docker-flags-20211117230357-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:13.245916    9212 cli_runner.go:115] Run: docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:13.336339    9212 cli_runner.go:162] docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:13.336524    9212 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:13.336667    9212 oci.go:672] temporary error: container docker-flags-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:13.336667    9212 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:13.804068    9212 cli_runner.go:115] Run: docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:13.897980    9212 cli_runner.go:162] docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:13.898067    9212 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:13.898067    9212 oci.go:672] temporary error: container docker-flags-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:13.898149    9212 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:14.793358    9212 cli_runner.go:115] Run: docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:14.892433    9212 cli_runner.go:162] docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:14.892507    9212 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:14.892507    9212 oci.go:672] temporary error: container docker-flags-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:14.892576    9212 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:15.533103    9212 cli_runner.go:115] Run: docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:15.634998    9212 cli_runner.go:162] docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:15.634998    9212 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:15.634998    9212 oci.go:672] temporary error: container docker-flags-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:15.634998    9212 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:16.749298    9212 cli_runner.go:115] Run: docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:16.845339    9212 cli_runner.go:162] docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:16.845468    9212 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:16.845468    9212 oci.go:672] temporary error: container docker-flags-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:16.845468    9212 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:18.362207    9212 cli_runner.go:115] Run: docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:18.454997    9212 cli_runner.go:162] docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:18.454997    9212 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:18.455368    9212 oci.go:672] temporary error: container docker-flags-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:18.455411    9212 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:21.505030    9212 cli_runner.go:115] Run: docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:21.594860    9212 cli_runner.go:162] docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:21.595049    9212 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:21.595049    9212 oci.go:672] temporary error: container docker-flags-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:21.595049    9212 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:27.385227    9212 cli_runner.go:115] Run: docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:27.476858    9212 cli_runner.go:162] docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:27.476933    9212 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:27.476933    9212 oci.go:672] temporary error: container docker-flags-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:27.476933    9212 oci.go:87] couldn't shut down docker-flags-20211117230357-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	 
	I1117 23:04:27.482415    9212 cli_runner.go:115] Run: docker rm -f -v docker-flags-20211117230357-9504
	W1117 23:04:27.572940    9212 cli_runner.go:162] docker rm -f -v docker-flags-20211117230357-9504 returned with exit code 1
	W1117 23:04:27.573969    9212 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:04:27.574134    9212 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:04:28.575253    9212 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:04:28.578489    9212 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:04:28.578755    9212 start.go:160] libmachine.API.Create for "docker-flags-20211117230357-9504" (driver="docker")
	I1117 23:04:28.578755    9212 client.go:168] LocalClient.Create starting
	I1117 23:04:28.579757    9212 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:04:28.579943    9212 main.go:130] libmachine: Decoding PEM data...
	I1117 23:04:28.579943    9212 main.go:130] libmachine: Parsing certificate...
	I1117 23:04:28.579943    9212 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:04:28.579943    9212 main.go:130] libmachine: Decoding PEM data...
	I1117 23:04:28.579943    9212 main.go:130] libmachine: Parsing certificate...
	I1117 23:04:28.586404    9212 cli_runner.go:115] Run: docker network inspect docker-flags-20211117230357-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:04:28.678798    9212 network_create.go:67] Found existing network {name:docker-flags-20211117230357-9504 subnet:0xc000bc8090 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 23:04:28.679107    9212 kic.go:106] calculated static IP "192.168.49.2" for the "docker-flags-20211117230357-9504" container
	I1117 23:04:28.687159    9212 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:04:28.783078    9212 cli_runner.go:115] Run: docker volume create docker-flags-20211117230357-9504 --label name.minikube.sigs.k8s.io=docker-flags-20211117230357-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:04:28.872771    9212 oci.go:102] Successfully created a docker volume docker-flags-20211117230357-9504
	I1117 23:04:28.877003    9212 cli_runner.go:115] Run: docker run --rm --name docker-flags-20211117230357-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20211117230357-9504 --entrypoint /usr/bin/test -v docker-flags-20211117230357-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:04:29.790930    9212 oci.go:106] Successfully prepared a docker volume docker-flags-20211117230357-9504
	I1117 23:04:29.790930    9212 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:04:29.790930    9212 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:04:29.796611    9212 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20211117230357-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:04:29.796611    9212 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:04:29.910122    9212 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20211117230357-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:04:29.910368    9212 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20211117230357-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:04:30.165774    9212 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:42 OomKillDisable:true NGoroutines:56 SystemTime:2021-11-17 23:04:29.876730579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:04:30.165774    9212 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:04:30.165774    9212 client.go:171] LocalClient.Create took 1.587007s
	I1117 23:04:32.174796    9212 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:04:32.177791    9212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504
	W1117 23:04:32.266965    9212 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504 returned with exit code 1
	I1117 23:04:32.267310    9212 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:32.458352    9212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504
	W1117 23:04:32.542272    9212 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504 returned with exit code 1
	I1117 23:04:32.542533    9212 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:32.878741    9212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504
	W1117 23:04:32.974893    9212 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504 returned with exit code 1
	I1117 23:04:32.975143    9212 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:33.441508    9212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504
	W1117 23:04:33.529213    9212 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504 returned with exit code 1
	W1117 23:04:33.529560    9212 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	
	W1117 23:04:33.529589    9212 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:33.529589    9212 start.go:129] duration metric: createHost completed in 4.9542991s
	I1117 23:04:33.537747    9212 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:04:33.541092    9212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504
	W1117 23:04:33.628179    9212 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504 returned with exit code 1
	I1117 23:04:33.628773    9212 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:33.832087    9212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504
	W1117 23:04:33.914490    9212 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504 returned with exit code 1
	I1117 23:04:33.914490    9212 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:34.217413    9212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504
	W1117 23:04:34.310330    9212 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504 returned with exit code 1
	I1117 23:04:34.310330    9212 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:34.979845    9212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504
	W1117 23:04:35.070844    9212 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504 returned with exit code 1
	W1117 23:04:35.070844    9212 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	
	W1117 23:04:35.070844    9212 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	I1117 23:04:35.070844    9212 fix.go:57] fixHost completed within 23.4275885s
	I1117 23:04:35.070844    9212 start.go:80] releasing machines lock for "docker-flags-20211117230357-9504", held for 23.4275885s
	W1117 23:04:35.071812    9212 out.go:241] * Failed to start docker container. Running "minikube delete -p docker-flags-20211117230357-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p docker-flags-20211117230357-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:04:35.077348    9212 out.go:176] 
	W1117 23:04:35.077696    9212 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:04:35.077803    9212 out.go:241] * 
	* 
	W1117 23:04:35.078983    9212 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:04:35.081287    9212 out.go:176] 

** /stderr **
docker_test.go:48: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p docker-flags-20211117230357-9504 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker" : exit status 80
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20211117230357-9504 ssh "sudo systemctl show docker --property=Environment --no-pager"

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p docker-flags-20211117230357-9504 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (1.8617716s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_d4f85ee29175a4f8b67ccfa3331e6e8264cb6e77_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:53: failed to 'systemctl show docker' inside minikube. args "out/minikube-windows-amd64.exe -p docker-flags-20211117230357-9504 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:58: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:58: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:62: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20211117230357-9504 ssh "sudo systemctl show docker --property=ExecStart --no-pager"

=== CONT  TestDockerFlags
docker_test.go:62: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p docker-flags-20211117230357-9504 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (1.8785607s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_e7205990054f4366ee7f5bb530c13b1f3df973dc_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
docker_test.go:64: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-windows-amd64.exe -p docker-flags-20211117230357-9504 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:68: expected "out/minikube-windows-amd64.exe -p docker-flags-20211117230357-9504 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:642: *** TestDockerFlags FAILED at 2021-11-17 23:04:38.9416445 +0000 GMT m=+2285.438477201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-20211117230357-9504
helpers_test.go:235: (dbg) docker inspect docker-flags-20211117230357-9504:
-- stdout --
	[
	    {
	        "Name": "docker-flags-20211117230357-9504",
	        "Id": "77502e795253addd3b7b8769f55cb29b11c782fd565bb91cec2bda24b4510b6d",
	        "Created": "2021-11-17T23:04:00.839334081Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-20211117230357-9504 -n docker-flags-20211117230357-9504
=== CONT  TestDockerFlags
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-20211117230357-9504 -n docker-flags-20211117230357-9504: exit status 7 (1.8182112s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 23:04:40.853283    8856 status.go:247] status error: host: state: unknown state "docker-flags-20211117230357-9504": docker container inspect docker-flags-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117230357-9504
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-20211117230357-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-20211117230357-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20211117230357-9504
=== CONT  TestDockerFlags
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20211117230357-9504: (2.525498s)
--- FAIL: TestDockerFlags (45.81s)
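The two assertion failures above (docker_test.go:58 and :68) reduce to substring checks against the captured `systemctl show docker` output; because the container was never created, the ssh command returned only `"\n\n"` and every lookup missed. A minimal sketch of that check, with an illustrative helper name rather than the actual test code:

```go
package main

import (
	"fmt"
	"strings"
)

// containsFlag reports whether a KEY=VALUE pair (or a daemon flag such as
// --debug) appears in the output of
// `systemctl show docker --property=Environment|ExecStart --no-pager`.
func containsFlag(systemctlOut, want string) bool {
	return strings.Contains(systemctlOut, want)
}

func main() {
	healthy := "Environment=FOO=BAR BAZ=BAT\n"
	broken := "\n\n" // what the failed run actually captured
	fmt.Println(containsFlag(healthy, "FOO=BAR")) // prints "true"
	fmt.Println(containsFlag(broken, "FOO=BAR"))  // prints "false"
}
```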
TestForceSystemdFlag (44.31s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20211117230313-9504 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-flag-20211117230313-9504 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: exit status 80 (37.884026s)
-- stdout --
	* [force-systemd-flag-20211117230313-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node force-systemd-flag-20211117230313-9504 in cluster force-systemd-flag-20211117230313-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-20211117230313-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	I1117 23:03:13.543212   10032 out.go:297] Setting OutFile to fd 1432 ...
	I1117 23:03:13.650495   10032 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:03:13.650495   10032 out.go:310] Setting ErrFile to fd 1428...
	I1117 23:03:13.651480   10032 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:03:13.661482   10032 out.go:304] Setting JSON to false
	I1117 23:03:13.663471   10032 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79509,"bootTime":1637110684,"procs":130,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:03:13.663471   10032 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:03:13.669484   10032 out.go:176] * [force-systemd-flag-20211117230313-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:03:13.670471   10032 notify.go:174] Checking for updates...
	I1117 23:03:13.672473   10032 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:03:13.675481   10032 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:03:13.678472   10032 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:03:13.679474   10032 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:03:13.679474   10032 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:03:15.312852   10032 docker.go:132] docker version: linux-19.03.12
	I1117 23:03:15.316188   10032 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:03:15.686704   10032 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:03:15.405147326 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:03:15.693710   10032 out.go:176] * Using the docker driver based on user configuration
	I1117 23:03:15.693710   10032 start.go:280] selected driver: docker
	I1117 23:03:15.693710   10032 start.go:775] validating driver "docker" against <nil>
	I1117 23:03:15.693710   10032 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:03:15.749713   10032 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:03:16.118306   10032 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:03:15.837170862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:03:16.118306   10032 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:03:16.118984   10032 start_flags.go:740] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 23:03:16.118984   10032 cni.go:93] Creating CNI manager for ""
	I1117 23:03:16.119260   10032 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:03:16.119260   10032 start_flags.go:282] config:
	{Name:force-systemd-flag-20211117230313-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:force-systemd-flag-20211117230313-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:03:16.122649   10032 out.go:176] * Starting control plane node force-systemd-flag-20211117230313-9504 in cluster force-systemd-flag-20211117230313-9504
	I1117 23:03:16.123177   10032 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:03:16.125953   10032 out.go:176] * Pulling base image ...
	I1117 23:03:16.125953   10032 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:03:16.125953   10032 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:03:16.125953   10032 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:03:16.125953   10032 cache.go:57] Caching tarball of preloaded images
	I1117 23:03:16.126839   10032 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:03:16.126934   10032 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:03:16.126934   10032 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-flag-20211117230313-9504\config.json ...
	I1117 23:03:16.127513   10032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-flag-20211117230313-9504\config.json: {Name:mk8934d5898a8041035f0b9bf694735ebe7e327d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:03:16.233657   10032 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:03:16.233657   10032 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:03:16.233657   10032 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:03:16.233657   10032 start.go:313] acquiring machines lock for force-systemd-flag-20211117230313-9504: {Name:mkef617df0f4c6f82e5d7b4b1db611ef48a5c363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:03:16.233657   10032 start.go:317] acquired machines lock for "force-systemd-flag-20211117230313-9504" in 0s
	I1117 23:03:16.234193   10032 start.go:89] Provisioning new machine with config: &{Name:force-systemd-flag-20211117230313-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:force-systemd-flag-20211117230313-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:03:16.234193   10032 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:03:16.238191   10032 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:03:16.238850   10032 start.go:160] libmachine.API.Create for "force-systemd-flag-20211117230313-9504" (driver="docker")
	I1117 23:03:16.238850   10032 client.go:168] LocalClient.Create starting
	I1117 23:03:16.238850   10032 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:03:16.239508   10032 main.go:130] libmachine: Decoding PEM data...
	I1117 23:03:16.239568   10032 main.go:130] libmachine: Parsing certificate...
	I1117 23:03:16.239568   10032 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:03:16.239568   10032 main.go:130] libmachine: Decoding PEM data...
	I1117 23:03:16.239568   10032 main.go:130] libmachine: Parsing certificate...
	I1117 23:03:16.245497   10032 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:03:16.350268   10032 cli_runner.go:162] docker network inspect force-systemd-flag-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:03:16.353884   10032 network_create.go:254] running [docker network inspect force-systemd-flag-20211117230313-9504] to gather additional debugging logs...
	I1117 23:03:16.353884   10032 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20211117230313-9504
	W1117 23:03:16.449278   10032 cli_runner.go:162] docker network inspect force-systemd-flag-20211117230313-9504 returned with exit code 1
	I1117 23:03:16.449278   10032 network_create.go:257] error running [docker network inspect force-systemd-flag-20211117230313-9504]: docker network inspect force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20211117230313-9504
	I1117 23:03:16.449278   10032 network_create.go:259] output of [docker network inspect force-systemd-flag-20211117230313-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20211117230313-9504
	
	** /stderr **
	I1117 23:03:16.452269   10032 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:03:16.562580   10032 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010528] misses:0}
	I1117 23:03:16.562580   10032 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:03:16.562580   10032 network_create.go:106] attempt to create docker network force-systemd-flag-20211117230313-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:03:16.567074   10032 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117230313-9504
	I1117 23:03:16.840496   10032 network_create.go:90] docker network force-systemd-flag-20211117230313-9504 192.168.49.0/24 created
	I1117 23:03:16.840682   10032 kic.go:106] calculated static IP "192.168.49.2" for the "force-systemd-flag-20211117230313-9504" container
	I1117 23:03:16.851028   10032 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:03:16.953593   10032 cli_runner.go:115] Run: docker volume create force-systemd-flag-20211117230313-9504 --label name.minikube.sigs.k8s.io=force-systemd-flag-20211117230313-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:03:17.079756   10032 oci.go:102] Successfully created a docker volume force-systemd-flag-20211117230313-9504
	I1117 23:03:17.083352   10032 cli_runner.go:115] Run: docker run --rm --name force-systemd-flag-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20211117230313-9504 --entrypoint /usr/bin/test -v force-systemd-flag-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:03:18.281503   10032 cli_runner.go:168] Completed: docker run --rm --name force-systemd-flag-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20211117230313-9504 --entrypoint /usr/bin/test -v force-systemd-flag-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.1981424s)
	I1117 23:03:18.281503   10032 oci.go:106] Successfully prepared a docker volume force-systemd-flag-20211117230313-9504
	I1117 23:03:18.281503   10032 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:03:18.281503   10032 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:03:18.287819   10032 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:03:18.288412   10032 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:03:18.398518   10032 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:03:18.398641   10032 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:03:18.660161   10032 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:55 OomKillDisable:true NGoroutines:89 SystemTime:2021-11-17 23:03:18.380535517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:03:18.660161   10032 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:03:18.660161   10032 client.go:171] LocalClient.Create took 2.4212932s
	I1117 23:03:20.671222   10032 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:03:20.675080   10032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504
	W1117 23:03:20.774373   10032 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504 returned with exit code 1
	I1117 23:03:20.774560   10032 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:21.055305   10032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504
	W1117 23:03:21.157398   10032 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504 returned with exit code 1
	I1117 23:03:21.157661   10032 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:21.705503   10032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504
	W1117 23:03:21.794918   10032 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504 returned with exit code 1
	I1117 23:03:21.794918   10032 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:22.455859   10032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504
	W1117 23:03:22.547292   10032 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504 returned with exit code 1
	W1117 23:03:22.547752   10032 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	
	W1117 23:03:22.547831   10032 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:22.547831   10032 start.go:129] duration metric: createHost completed in 6.3135908s
	I1117 23:03:22.547831   10032 start.go:80] releasing machines lock for "force-systemd-flag-20211117230313-9504", held for 6.3141273s
	W1117 23:03:22.548040   10032 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:03:22.557693   10032 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:22.647237   10032 cli_runner.go:162] docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:22.647650   10032 delete.go:82] Unable to get host status for force-systemd-flag-20211117230313-9504, assuming it has already been deleted: state: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	W1117 23:03:22.647650   10032 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:03:22.647650   10032 start.go:547] Will try again in 5 seconds ...
	I1117 23:03:27.648478   10032 start.go:313] acquiring machines lock for force-systemd-flag-20211117230313-9504: {Name:mkef617df0f4c6f82e5d7b4b1db611ef48a5c363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:03:27.648478   10032 start.go:317] acquired machines lock for "force-systemd-flag-20211117230313-9504" in 0s
	I1117 23:03:27.648967   10032 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:03:27.648967   10032 fix.go:55] fixHost starting: 
	I1117 23:03:27.656244   10032 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:27.748132   10032 cli_runner.go:162] docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:27.748304   10032 fix.go:108] recreateIfNeeded on force-systemd-flag-20211117230313-9504: state= err=unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:27.748371   10032 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:03:27.752339   10032 out.go:176] * docker "force-systemd-flag-20211117230313-9504" container is missing, will recreate.
	I1117 23:03:27.752415   10032 delete.go:124] DEMOLISHING force-systemd-flag-20211117230313-9504 ...
	I1117 23:03:27.759102   10032 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:27.847304   10032 cli_runner.go:162] docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:03:27.847304   10032 stop.go:75] unable to get state: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:27.847638   10032 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:27.856155   10032 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:27.945540   10032 cli_runner.go:162] docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:27.945738   10032 delete.go:82] Unable to get host status for force-systemd-flag-20211117230313-9504, assuming it has already been deleted: state: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:27.949195   10032 cli_runner.go:115] Run: docker container inspect -f {{.Id}} force-systemd-flag-20211117230313-9504
	W1117 23:03:28.041916   10032 cli_runner.go:162] docker container inspect -f {{.Id}} force-systemd-flag-20211117230313-9504 returned with exit code 1
	I1117 23:03:28.042014   10032 kic.go:360] could not find the container force-systemd-flag-20211117230313-9504 to remove it. will try anyways
	I1117 23:03:28.045323   10032 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:28.145245   10032 cli_runner.go:162] docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:03:28.145452   10032 oci.go:83] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:28.149883   10032 cli_runner.go:115] Run: docker exec --privileged -t force-systemd-flag-20211117230313-9504 /bin/bash -c "sudo init 0"
	W1117 23:03:28.249488   10032 cli_runner.go:162] docker exec --privileged -t force-systemd-flag-20211117230313-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:03:28.249608   10032 oci.go:658] error shutdown force-systemd-flag-20211117230313-9504: docker exec --privileged -t force-systemd-flag-20211117230313-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:29.254964   10032 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:29.341355   10032 cli_runner.go:162] docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:29.341606   10032 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:29.341606   10032 oci.go:672] temporary error: container force-systemd-flag-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:29.341606   10032 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:29.810020   10032 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:29.912744   10032 cli_runner.go:162] docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:29.912819   10032 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:29.912887   10032 oci.go:672] temporary error: container force-systemd-flag-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:29.912917   10032 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:30.812555   10032 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:30.903665   10032 cli_runner.go:162] docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:30.903873   10032 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:30.903873   10032 oci.go:672] temporary error: container force-systemd-flag-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:30.903873   10032 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:31.545826   10032 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:31.637946   10032 cli_runner.go:162] docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:31.638124   10032 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:31.638124   10032 oci.go:672] temporary error: container force-systemd-flag-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:31.638216   10032 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:32.751974   10032 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:32.841573   10032 cli_runner.go:162] docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:32.841573   10032 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:32.841820   10032 oci.go:672] temporary error: container force-systemd-flag-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:32.841882   10032 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:34.358636   10032 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:34.451514   10032 cli_runner.go:162] docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:34.451620   10032 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:34.451620   10032 oci.go:672] temporary error: container force-systemd-flag-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:34.451787   10032 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:37.498033   10032 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:37.588155   10032 cli_runner.go:162] docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:37.588441   10032 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:37.588500   10032 oci.go:672] temporary error: container force-systemd-flag-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:37.588500   10032 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:43.376512   10032 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}
	W1117 23:03:43.467607   10032 cli_runner.go:162] docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:03:43.467806   10032 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:43.467903   10032 oci.go:672] temporary error: container force-systemd-flag-20211117230313-9504 status is  but expect it to be exited
	I1117 23:03:43.467903   10032 oci.go:87] couldn't shut down force-systemd-flag-20211117230313-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	 
	I1117 23:03:43.471967   10032 cli_runner.go:115] Run: docker rm -f -v force-systemd-flag-20211117230313-9504
	W1117 23:03:43.557547   10032 cli_runner.go:162] docker rm -f -v force-systemd-flag-20211117230313-9504 returned with exit code 1
	W1117 23:03:43.558528   10032 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:03:43.558528   10032 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:03:44.559109   10032 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:03:44.563476   10032 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:03:44.563883   10032 start.go:160] libmachine.API.Create for "force-systemd-flag-20211117230313-9504" (driver="docker")
	I1117 23:03:44.563992   10032 client.go:168] LocalClient.Create starting
	I1117 23:03:44.564711   10032 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:03:44.565061   10032 main.go:130] libmachine: Decoding PEM data...
	I1117 23:03:44.565061   10032 main.go:130] libmachine: Parsing certificate...
	I1117 23:03:44.565294   10032 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:03:44.565741   10032 main.go:130] libmachine: Decoding PEM data...
	I1117 23:03:44.565741   10032 main.go:130] libmachine: Parsing certificate...
	I1117 23:03:44.573135   10032 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:03:44.664633   10032 network_create.go:67] Found existing network {name:force-systemd-flag-20211117230313-9504 subnet:0xc000e06480 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 23:03:44.664633   10032 kic.go:106] calculated static IP "192.168.49.2" for the "force-systemd-flag-20211117230313-9504" container
	I1117 23:03:44.672254   10032 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:03:44.764579   10032 cli_runner.go:115] Run: docker volume create force-systemd-flag-20211117230313-9504 --label name.minikube.sigs.k8s.io=force-systemd-flag-20211117230313-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:03:44.850870   10032 oci.go:102] Successfully created a docker volume force-systemd-flag-20211117230313-9504
	I1117 23:03:44.855460   10032 cli_runner.go:115] Run: docker run --rm --name force-systemd-flag-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20211117230313-9504 --entrypoint /usr/bin/test -v force-systemd-flag-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:03:45.907266   10032 cli_runner.go:168] Completed: docker run --rm --name force-systemd-flag-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20211117230313-9504 --entrypoint /usr/bin/test -v force-systemd-flag-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.0517231s)
	I1117 23:03:45.907266   10032 oci.go:106] Successfully prepared a docker volume force-systemd-flag-20211117230313-9504
	I1117 23:03:45.907583   10032 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:03:45.907741   10032 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:03:45.913254   10032 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:03:45.913254   10032 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:03:46.030054   10032 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:03:46.030054   10032 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:03:46.277406   10032 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:86 SystemTime:2021-11-17 23:03:46.012986785 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:03:46.277732   10032 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:03:46.277798   10032 client.go:171] LocalClient.Create took 1.7137936s
	I1117 23:03:48.287082   10032 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:03:48.290384   10032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504
	W1117 23:03:48.383270   10032 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504 returned with exit code 1
	I1117 23:03:48.383504   10032 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:48.567929   10032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504
	W1117 23:03:48.658760   10032 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504 returned with exit code 1
	I1117 23:03:48.658760   10032 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:48.992967   10032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504
	W1117 23:03:49.089179   10032 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504 returned with exit code 1
	I1117 23:03:49.089179   10032 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:49.554492   10032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504
	W1117 23:03:49.647075   10032 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504 returned with exit code 1
	W1117 23:03:49.647452   10032 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	
	W1117 23:03:49.647513   10032 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:49.647513   10032 start.go:129] duration metric: createHost completed in 5.0883656s
	I1117 23:03:49.654490   10032 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:03:49.658632   10032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504
	W1117 23:03:49.748080   10032 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504 returned with exit code 1
	I1117 23:03:49.748488   10032 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:49.949770   10032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504
	W1117 23:03:50.044221   10032 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504 returned with exit code 1
	I1117 23:03:50.044387   10032 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:50.347270   10032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504
	W1117 23:03:50.440335   10032 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504 returned with exit code 1
	I1117 23:03:50.440627   10032 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:51.109447   10032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504
	W1117 23:03:51.196848   10032 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504 returned with exit code 1
	W1117 23:03:51.197097   10032 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	
	W1117 23:03:51.197097   10032 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	I1117 23:03:51.197097   10032 fix.go:57] fixHost completed within 23.5479533s
	I1117 23:03:51.197097   10032 start.go:80] releasing machines lock for "force-systemd-flag-20211117230313-9504", held for 23.5481108s
	W1117 23:03:51.197936   10032 out.go:241] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-20211117230313-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-20211117230313-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:03:51.202565   10032 out.go:176] 
	W1117 23:03:51.202734   10032 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:03:51.202734   10032 out.go:241] * 
	* 
	W1117 23:03:51.204151   10032 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:03:51.207401   10032 out.go:176] 
** /stderr **
docker_test.go:88: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-flag-20211117230313-9504 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker" : exit status 80
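For context, the port lookup that keeps failing in the log above is a Go text/template that `docker container inspect -f` evaluates against the container's inspect data. The sketch below evaluates that exact format string against a hypothetical `inspectData` mock (not Docker's real types) to show why it can only error out once the container no longer exists:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// portBinding mimics the shape of Docker's NetworkSettings.Ports entries.
type portBinding struct {
	HostIP   string
	HostPort string
}

// inspectData is a hypothetical stand-in for `docker container inspect` output.
type inspectData struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

// format is the template string minikube passes to `docker container inspect -f`.
const format = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`

// renderHostPort evaluates the format string against the inspect data,
// returning the host port mapped to the container's SSH port (22/tcp).
func renderHostPort(d inspectData) (string, error) {
	tmpl, err := template.New("hostport").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, d); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	var d inspectData
	d.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIP: "127.0.0.1", HostPort: "55000"}},
	}
	out, err := renderHostPort(d)
	fmt.Println(out, err) // prints: 55000 <nil>
}
```

With no `22/tcp` binding present, which is the situation once the container has been removed, the inner `index` yields an empty slice and template execution fails; in the real run that failure surfaces as the repeated `exit status 1`.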
docker_test.go:105: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20211117230313-9504 ssh "docker info --format {{.CgroupDriver}}"
=== CONT  TestForceSystemdFlag
docker_test.go:105: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p force-systemd-flag-20211117230313-9504 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (1.8786291s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2837ebd22544166cf14c5e2e977cc80019e59e54_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
docker_test.go:107: failed to get docker cgroup driver. args "out/minikube-windows-amd64.exe -p force-systemd-flag-20211117230313-9504 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:101: *** TestForceSystemdFlag FAILED at 2021-11-17 23:03:53.1974909 +0000 GMT m=+2239.694666701
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-20211117230313-9504
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-20211117230313-9504:
-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-20211117230313-9504",
	        "Id": "bcfa9b4ad35f1aa0c991676023f6a6481bfcf9251537a728b19c1fcdeeefc3ac",
	        "Created": "2021-11-17T23:03:16.658363815Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-20211117230313-9504 -n force-systemd-flag-20211117230313-9504
=== CONT  TestForceSystemdFlag
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-20211117230313-9504 -n force-systemd-flag-20211117230313-9504: exit status 7 (1.8078699s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 23:03:55.096332   11700 status.go:247] status error: host: state: unknown state "force-systemd-flag-20211117230313-9504": docker container inspect force-systemd-flag-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117230313-9504
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-20211117230313-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-20211117230313-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20211117230313-9504
=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20211117230313-9504: (2.5484689s)
--- FAIL: TestForceSystemdFlag (44.31s)
TestForceSystemdEnv (44.94s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20211117230357-9504 --memory=2048 --alsologtostderr -v=5 --driver=docker
=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-env-20211117230357-9504 --memory=2048 --alsologtostderr -v=5 --driver=docker: exit status 80 (37.7147252s)
-- stdout --
	* [force-systemd-env-20211117230357-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Starting control plane node force-systemd-env-20211117230357-9504 in cluster force-systemd-env-20211117230357-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-20211117230357-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	I1117 23:03:57.832583    5660 out.go:297] Setting OutFile to fd 1572 ...
	I1117 23:03:57.907072    5660 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:03:57.907166    5660 out.go:310] Setting ErrFile to fd 1320...
	I1117 23:03:57.907295    5660 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:03:57.916173    5660 out.go:304] Setting JSON to false
	I1117 23:03:57.919503    5660 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79553,"bootTime":1637110684,"procs":130,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:03:57.919503    5660 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:03:57.924676    5660 out.go:176] * [force-systemd-env-20211117230357-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:03:57.925041    5660 notify.go:174] Checking for updates...
	I1117 23:03:57.927860    5660 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:03:57.930670    5660 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:03:57.932215    5660 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:03:57.935267    5660 out.go:176]   - MINIKUBE_FORCE_SYSTEMD=true
	I1117 23:03:57.935985    5660 config.go:176] Loaded profile config "NoKubernetes-20211117230313-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1117 23:03:57.936316    5660 config.go:176] Loaded profile config "cert-expiration-20211117230315-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:03:57.936880    5660 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:03:57.937017    5660 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:03:59.549510    5660 docker.go:132] docker version: linux-19.03.12
	I1117 23:03:59.553579    5660 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:03:59.900897    5660 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:03:59.634670925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:03:59.907654    5660 out.go:176] * Using the docker driver based on user configuration
	I1117 23:03:59.907726    5660 start.go:280] selected driver: docker
	I1117 23:03:59.907726    5660 start.go:775] validating driver "docker" against <nil>
	I1117 23:03:59.907726    5660 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:03:59.967409    5660 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:04:00.321952    5660 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:04:00.057652905 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:04:00.322478    5660 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:04:00.323137    5660 start_flags.go:740] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 23:04:00.323196    5660 cni.go:93] Creating CNI manager for ""
	I1117 23:04:00.323239    5660 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:04:00.323266    5660 start_flags.go:282] config:
	{Name:force-systemd-env-20211117230357-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:force-systemd-env-20211117230357-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:04:00.328138    5660 out.go:176] * Starting control plane node force-systemd-env-20211117230357-9504 in cluster force-systemd-env-20211117230357-9504
	I1117 23:04:00.328292    5660 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:04:00.330351    5660 out.go:176] * Pulling base image ...
	I1117 23:04:00.330531    5660 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:04:00.330531    5660 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:04:00.330651    5660 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:04:00.330808    5660 cache.go:57] Caching tarball of preloaded images
	I1117 23:04:00.331306    5660 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:04:00.331498    5660 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:04:00.331793    5660 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-20211117230357-9504\config.json ...
	I1117 23:04:00.332001    5660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-20211117230357-9504\config.json: {Name:mkf89ff2079ee48b84947821cab24bc54d9f410b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:04:00.426386    5660 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:04:00.426386    5660 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:04:00.426386    5660 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:04:00.426386    5660 start.go:313] acquiring machines lock for force-systemd-env-20211117230357-9504: {Name:mkae7f7be238b691d27af3c4e77d3ac2238b1079 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:04:00.426386    5660 start.go:317] acquired machines lock for "force-systemd-env-20211117230357-9504" in 0s
	I1117 23:04:00.426386    5660 start.go:89] Provisioning new machine with config: &{Name:force-systemd-env-20211117230357-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:force-systemd-env-20211117230357-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:04:00.426386    5660 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:04:00.429385    5660 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:04:00.429385    5660 start.go:160] libmachine.API.Create for "force-systemd-env-20211117230357-9504" (driver="docker")
	I1117 23:04:00.429385    5660 client.go:168] LocalClient.Create starting
	I1117 23:04:00.430375    5660 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:04:00.430375    5660 main.go:130] libmachine: Decoding PEM data...
	I1117 23:04:00.430375    5660 main.go:130] libmachine: Parsing certificate...
	I1117 23:04:00.430375    5660 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:04:00.430375    5660 main.go:130] libmachine: Decoding PEM data...
	I1117 23:04:00.430375    5660 main.go:130] libmachine: Parsing certificate...
	I1117 23:04:00.435379    5660 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211117230357-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:04:00.535477    5660 cli_runner.go:162] docker network inspect force-systemd-env-20211117230357-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:04:00.539258    5660 network_create.go:254] running [docker network inspect force-systemd-env-20211117230357-9504] to gather additional debugging logs...
	I1117 23:04:00.539258    5660 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211117230357-9504
	W1117 23:04:00.623263    5660 cli_runner.go:162] docker network inspect force-systemd-env-20211117230357-9504 returned with exit code 1
	I1117 23:04:00.623263    5660 network_create.go:257] error running [docker network inspect force-systemd-env-20211117230357-9504]: docker network inspect force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20211117230357-9504
	I1117 23:04:00.623539    5660 network_create.go:259] output of [docker network inspect force-systemd-env-20211117230357-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20211117230357-9504
	
	** /stderr **
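	(Editor's note: when `docker network inspect` fails above, minikube re-runs the command and logs stdout and stderr in separate sections. A minimal, hypothetical sketch of that capture pattern, not minikube's actual `cli_runner` API, looks like this; `runCmd` and the use of `echo` in place of `docker` are illustrative assumptions so the snippet runs anywhere.)

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runCmd runs a command and captures stdout and stderr into separate
// buffers, so both can be logged on failure the way the
// "-- stdout -- / ** stderr **" sections above are.
func runCmd(name string, args ...string) (string, string, error) {
	var out, errb bytes.Buffer
	cmd := exec.Command(name, args...)
	cmd.Stdout = &out
	cmd.Stderr = &errb
	err := cmd.Run()
	return out.String(), errb.String(), err
}

func main() {
	// A portable stand-in for `docker network inspect ...`.
	stdout, stderr, err := runCmd("echo", "hello")
	fmt.Printf("err=%v stderr=%q stdout=%q\n", err, stderr, stdout)
}
```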
	I1117 23:04:00.627658    5660 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:04:00.747443    5660 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006890] misses:0}
	I1117 23:04:00.747500    5660 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:04:00.747548    5660 network_create.go:106] attempt to create docker network force-systemd-env-20211117230357-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:04:00.752001    5660 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117230357-9504
	W1117 23:04:00.853926    5660 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117230357-9504 returned with exit code 1
	W1117 23:04:00.854008    5660 network_create.go:98] failed to create docker network force-systemd-env-20211117230357-9504 192.168.49.0/24, will retry: subnet is taken
	I1117 23:04:00.867751    5660 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006890] amended:false}} dirty:map[] misses:0}
	I1117 23:04:00.867751    5660 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:04:00.882133    5660 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006890] amended:true}} dirty:map[192.168.49.0:0xc000006890 192.168.58.0:0xc0004b04e0] misses:0}
	I1117 23:04:00.882133    5660 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:04:00.882133    5660 network_create.go:106] attempt to create docker network force-systemd-env-20211117230357-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:04:00.886137    5660 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117230357-9504
	I1117 23:04:01.096290    5660 network_create.go:90] docker network force-systemd-env-20211117230357-9504 192.168.58.0/24 created
	I1117 23:04:01.096368    5660 kic.go:106] calculated static IP "192.168.58.2" for the "force-systemd-env-20211117230357-9504" container
	I1117 23:04:01.103149    5660 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:04:01.195307    5660 cli_runner.go:115] Run: docker volume create force-systemd-env-20211117230357-9504 --label name.minikube.sigs.k8s.io=force-systemd-env-20211117230357-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:04:01.290649    5660 oci.go:102] Successfully created a docker volume force-systemd-env-20211117230357-9504
	I1117 23:04:01.295458    5660 cli_runner.go:115] Run: docker run --rm --name force-systemd-env-20211117230357-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20211117230357-9504 --entrypoint /usr/bin/test -v force-systemd-env-20211117230357-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:04:02.530586    5660 cli_runner.go:168] Completed: docker run --rm --name force-systemd-env-20211117230357-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20211117230357-9504 --entrypoint /usr/bin/test -v force-systemd-env-20211117230357-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.2350904s)
	I1117 23:04:02.530694    5660 oci.go:106] Successfully prepared a docker volume force-systemd-env-20211117230357-9504
	I1117 23:04:02.530694    5660 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:04:02.530839    5660 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:04:02.535128    5660 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20211117230357-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:04:02.535826    5660 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:04:02.650786    5660 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20211117230357-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:04:02.650786    5660 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20211117230357-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:04:02.874492    5660 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:04:02.616500141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:04:02.874808    5660 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:04:02.874808    5660 client.go:171] LocalClient.Create took 2.4454049s
	I1117 23:04:04.882666    5660 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:04:04.885367    5660 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504
	W1117 23:04:04.978948    5660 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504 returned with exit code 1
	I1117 23:04:04.979325    5660 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:05.260954    5660 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504
	W1117 23:04:05.352705    5660 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504 returned with exit code 1
	I1117 23:04:05.352851    5660 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:05.898062    5660 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504
	W1117 23:04:05.985193    5660 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504 returned with exit code 1
	I1117 23:04:05.985384    5660 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:06.645640    5660 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504
	W1117 23:04:06.734638    5660 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504 returned with exit code 1
	W1117 23:04:06.734970    5660 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	
	W1117 23:04:06.734970    5660 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:06.734970    5660 start.go:129] duration metric: createHost completed in 6.3085367s
	I1117 23:04:06.734970    5660 start.go:80] releasing machines lock for "force-systemd-env-20211117230357-9504", held for 6.3085367s
	W1117 23:04:06.734970    5660 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:04:06.743920    5660 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:06.832902    5660 cli_runner.go:162] docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:06.833166    5660 delete.go:82] Unable to get host status for force-systemd-env-20211117230357-9504, assuming it has already been deleted: state: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	W1117 23:04:06.833424    5660 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:04:06.833495    5660 start.go:547] Will try again in 5 seconds ...
	I1117 23:04:11.834080    5660 start.go:313] acquiring machines lock for force-systemd-env-20211117230357-9504: {Name:mkae7f7be238b691d27af3c4e77d3ac2238b1079 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:04:11.834305    5660 start.go:317] acquired machines lock for "force-systemd-env-20211117230357-9504" in 225.5µs
	I1117 23:04:11.834305    5660 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:04:11.834305    5660 fix.go:55] fixHost starting: 
	I1117 23:04:11.843667    5660 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:11.934876    5660 cli_runner.go:162] docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:11.934876    5660 fix.go:108] recreateIfNeeded on force-systemd-env-20211117230357-9504: state= err=unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:11.934876    5660 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:04:11.937877    5660 out.go:176] * docker "force-systemd-env-20211117230357-9504" container is missing, will recreate.
	I1117 23:04:11.937877    5660 delete.go:124] DEMOLISHING force-systemd-env-20211117230357-9504 ...
	I1117 23:04:11.944888    5660 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:12.033023    5660 cli_runner.go:162] docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:04:12.033023    5660 stop.go:75] unable to get state: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:12.033023    5660 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:12.041029    5660 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:12.133492    5660 cli_runner.go:162] docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:12.133647    5660 delete.go:82] Unable to get host status for force-systemd-env-20211117230357-9504, assuming it has already been deleted: state: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:12.137604    5660 cli_runner.go:115] Run: docker container inspect -f {{.Id}} force-systemd-env-20211117230357-9504
	W1117 23:04:12.229414    5660 cli_runner.go:162] docker container inspect -f {{.Id}} force-systemd-env-20211117230357-9504 returned with exit code 1
	I1117 23:04:12.229414    5660 kic.go:360] could not find the container force-systemd-env-20211117230357-9504 to remove it. will try anyways
	I1117 23:04:12.233143    5660 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:12.335340    5660 cli_runner.go:162] docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:04:12.335660    5660 oci.go:83] error getting container status, will try to delete anyways: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:12.339873    5660 cli_runner.go:115] Run: docker exec --privileged -t force-systemd-env-20211117230357-9504 /bin/bash -c "sudo init 0"
	W1117 23:04:12.425740    5660 cli_runner.go:162] docker exec --privileged -t force-systemd-env-20211117230357-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:04:12.425740    5660 oci.go:658] error shutdown force-systemd-env-20211117230357-9504: docker exec --privileged -t force-systemd-env-20211117230357-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:13.431353    5660 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:13.534990    5660 cli_runner.go:162] docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:13.534990    5660 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:13.534990    5660 oci.go:672] temporary error: container force-systemd-env-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:13.534990    5660 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:14.001877    5660 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:14.099089    5660 cli_runner.go:162] docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:14.099261    5660 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:14.099307    5660 oci.go:672] temporary error: container force-systemd-env-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:14.099376    5660 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:14.994506    5660 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:15.086119    5660 cli_runner.go:162] docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:15.086119    5660 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:15.086119    5660 oci.go:672] temporary error: container force-systemd-env-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:15.086119    5660 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:15.727027    5660 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:15.836772    5660 cli_runner.go:162] docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:15.836772    5660 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:15.836772    5660 oci.go:672] temporary error: container force-systemd-env-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:15.836772    5660 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:16.949841    5660 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:17.055838    5660 cli_runner.go:162] docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:17.056180    5660 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:17.056277    5660 oci.go:672] temporary error: container force-systemd-env-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:17.056380    5660 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:18.575097    5660 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:18.662959    5660 cli_runner.go:162] docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:18.663038    5660 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:18.663246    5660 oci.go:672] temporary error: container force-systemd-env-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:18.663246    5660 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:21.710483    5660 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:21.799525    5660 cli_runner.go:162] docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:21.799660    5660 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:21.799660    5660 oci.go:672] temporary error: container force-systemd-env-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:21.799660    5660 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:27.589084    5660 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}
	W1117 23:04:27.677870    5660 cli_runner.go:162] docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:04:27.678209    5660 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:27.678209    5660 oci.go:672] temporary error: container force-systemd-env-20211117230357-9504 status is  but expect it to be exited
	I1117 23:04:27.678312    5660 oci.go:87] couldn't shut down force-systemd-env-20211117230357-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	 
	I1117 23:04:27.683423    5660 cli_runner.go:115] Run: docker rm -f -v force-systemd-env-20211117230357-9504
	W1117 23:04:27.774224    5660 cli_runner.go:162] docker rm -f -v force-systemd-env-20211117230357-9504 returned with exit code 1
	W1117 23:04:27.775380    5660 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:04:27.775380    5660 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:04:28.775724    5660 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:04:28.780078    5660 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:04:28.780078    5660 start.go:160] libmachine.API.Create for "force-systemd-env-20211117230357-9504" (driver="docker")
	I1117 23:04:28.780078    5660 client.go:168] LocalClient.Create starting
	I1117 23:04:28.780914    5660 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:04:28.781552    5660 main.go:130] libmachine: Decoding PEM data...
	I1117 23:04:28.781552    5660 main.go:130] libmachine: Parsing certificate...
	I1117 23:04:28.781552    5660 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:04:28.781552    5660 main.go:130] libmachine: Decoding PEM data...
	I1117 23:04:28.781552    5660 main.go:130] libmachine: Parsing certificate...
	I1117 23:04:28.786900    5660 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211117230357-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:04:28.885399    5660 network_create.go:67] Found existing network {name:force-systemd-env-20211117230357-9504 subnet:0xc000d06840 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1117 23:04:28.885494    5660 kic.go:106] calculated static IP "192.168.58.2" for the "force-systemd-env-20211117230357-9504" container
	I1117 23:04:28.893648    5660 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:04:28.983869    5660 cli_runner.go:115] Run: docker volume create force-systemd-env-20211117230357-9504 --label name.minikube.sigs.k8s.io=force-systemd-env-20211117230357-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:04:29.088004    5660 oci.go:102] Successfully created a docker volume force-systemd-env-20211117230357-9504
	I1117 23:04:29.091567    5660 cli_runner.go:115] Run: docker run --rm --name force-systemd-env-20211117230357-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20211117230357-9504 --entrypoint /usr/bin/test -v force-systemd-env-20211117230357-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:04:30.027797    5660 oci.go:106] Successfully prepared a docker volume force-systemd-env-20211117230357-9504
	I1117 23:04:30.027964    5660 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:04:30.027964    5660 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:04:30.032931    5660 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20211117230357-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:04:30.033314    5660 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:04:30.149781    5660 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20211117230357-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:04:30.149781    5660 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20211117230357-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:04:30.420364    5660 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 23:04:30.126536058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:04:30.420753    5660 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:04:30.420753    5660 client.go:171] LocalClient.Create took 1.640663s
	I1117 23:04:32.430821    5660 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:04:32.433752    5660 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504
	W1117 23:04:32.523746    5660 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504 returned with exit code 1
	I1117 23:04:32.524100    5660 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:32.709048    5660 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504
	W1117 23:04:32.797606    5660 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504 returned with exit code 1
	I1117 23:04:32.797722    5660 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:33.134775    5660 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504
	W1117 23:04:33.222838    5660 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504 returned with exit code 1
	I1117 23:04:33.222838    5660 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:33.688665    5660 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504
	W1117 23:04:33.782927    5660 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504 returned with exit code 1
	W1117 23:04:33.783138    5660 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	
	W1117 23:04:33.783138    5660 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:33.783138    5660 start.go:129] duration metric: createHost completed in 5.007343s
	I1117 23:04:33.791315    5660 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:04:33.794654    5660 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504
	W1117 23:04:33.886523    5660 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504 returned with exit code 1
	I1117 23:04:33.886523    5660 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:34.086474    5660 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504
	W1117 23:04:34.181402    5660 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504 returned with exit code 1
	I1117 23:04:34.181402    5660 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:34.484928    5660 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504
	W1117 23:04:34.573484    5660 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504 returned with exit code 1
	I1117 23:04:34.573484    5660 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:35.242122    5660 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504
	W1117 23:04:35.337191    5660 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504 returned with exit code 1
	W1117 23:04:35.337191    5660 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	
	W1117 23:04:35.337191    5660 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117230357-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117230357-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	I1117 23:04:35.337191    5660 fix.go:57] fixHost completed within 23.5027096s
	I1117 23:04:35.337191    5660 start.go:80] releasing machines lock for "force-systemd-env-20211117230357-9504", held for 23.5027096s
	W1117 23:04:35.337191    5660 out.go:241] * Failed to start docker container. Running "minikube delete -p force-systemd-env-20211117230357-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-20211117230357-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:04:35.341192    5660 out.go:176] 
	W1117 23:04:35.341192    5660 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:04:35.341192    5660 out.go:241] * 
	* 
	W1117 23:04:35.343192    5660 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:04:35.345187    5660 out.go:176] 

** /stderr **
docker_test.go:153: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-env-20211117230357-9504 --memory=2048 --alsologtostderr -v=5 --driver=docker" : exit status 80
docker_test.go:105: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20211117230357-9504 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdEnv
docker_test.go:105: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p force-systemd-env-20211117230357-9504 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (1.8128559s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2837ebd22544166cf14c5e2e977cc80019e59e54_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:107: failed to get docker cgroup driver. args "out/minikube-windows-amd64.exe -p force-systemd-env-20211117230357-9504 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:162: *** TestForceSystemdEnv FAILED at 2021-11-17 23:04:37.2606484 +0000 GMT m=+2283.757493701
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-20211117230357-9504
helpers_test.go:235: (dbg) docker inspect force-systemd-env-20211117230357-9504:

-- stdout --
	[
	    {
	        "Name": "force-systemd-env-20211117230357-9504",
	        "Id": "a32cc8f38e13adb0482276745db1c3dbb0ad08a3cee23e8356ac017c8c9a1077",
	        "Created": "2021-11-17T23:04:00.966245735Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-20211117230357-9504 -n force-systemd-env-20211117230357-9504

=== CONT  TestForceSystemdEnv
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-20211117230357-9504 -n force-systemd-env-20211117230357-9504: exit status 7 (1.8317427s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:04:39.178983    7900 status.go:247] status error: host: state: unknown state "force-systemd-env-20211117230357-9504": docker container inspect force-systemd-env-20211117230357-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117230357-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-20211117230357-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-20211117230357-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20211117230357-9504

=== CONT  TestForceSystemdEnv
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20211117230357-9504: (3.4008764s)
--- FAIL: TestForceSystemdEnv (44.94s)
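A note on the exit codes above: `minikube status` returned 7 once the container vanished, and the harness treats that as "may be ok". The mapping below is an illustrative sketch, not minikube's implementation; only the Nonexistent-to-7 pairing is taken from this log, and the other branches are assumptions.

```shell
# Illustrative host-state -> exit-code mapping for "minikube status".
# Only Nonexistent -> 7 is confirmed by the log above; other branches are guesses.
status_exit_code() {
  case "$1" in
    Running)     echo 0 ;;  # healthy host (assumed)
    Nonexistent) echo 7 ;;  # matches "exit status 7 (may be ok)" in the log
    *)           echo 1 ;;  # generic error (assumed)
  esac
}

status_exit_code Nonexistent   # prints 7
```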

TestErrorSpam/setup (37.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20211117222914-9504 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 --driver=docker
error_spam_test.go:79: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p nospam-20211117222914-9504 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 --driver=docker: exit status 80 (37.1767818s)

-- stdout --
	* [nospam-20211117222914-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node nospam-20211117222914-9504 in cluster nospam-20211117222914-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2250MB) ...
	* docker "nospam-20211117222914-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2250MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 22:29:19.270467    9500 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 22:29:46.647469    9500 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p nospam-20211117222914-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:81: "out/minikube-windows-amd64.exe start -p nospam-20211117222914-9504 -n=1 --memory=2250 --wait=false --log_dir=C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20211117222914-9504 --driver=docker" failed: exit status 80
error_spam_test.go:94: unexpected stderr: "E1117 22:29:19.270467    9500 oci.go:197] error getting kernel modules path: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "E1117 22:29:46.647469    9500 oci.go:197] error getting kernel modules path: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "* Failed to start docker container. Running \"minikube delete -p nospam-20211117222914-9504\" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "* "
error_spam_test.go:94: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:94: unexpected stderr: "│                                                                                             │"
error_spam_test.go:94: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:94: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:94: unexpected stderr: "│                                                                                             │"
error_spam_test.go:94: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:94: unexpected stderr: "│                                                                                             │"
error_spam_test.go:94: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:108: minikube stdout:
* [nospam-20211117222914-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
- KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
- MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
- MINIKUBE_LOCATION=12739
* Using the docker driver based on user configuration
* Starting control plane node nospam-20211117222914-9504 in cluster nospam-20211117222914-9504
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* docker "nospam-20211117222914-9504" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=2250MB) ...


error_spam_test.go:109: minikube stderr:
E1117 22:29:19.270467    9500 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
E1117 22:29:46.647469    9500 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
* Failed to start docker container. Running "minikube delete -p nospam-20211117222914-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:119: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:119: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:119: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (37.18s)
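The setup failure reduces to minikube hitting the same `Unable to locate kernel modules` error on both the initial container create and the recreate. Counting the oci.go errors in a captured transcript confirms the retry; the snippet below embeds a trimmed copy of the stderr above rather than reading a real log file.

```shell
# Count the oci.go kernel-modules errors in a (trimmed, embedded) copy of the
# stderr transcript: two hits = the initial create failed, then the recreate failed.
stderr_log='E1117 22:29:19.270467 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules
E1117 22:29:46.647469 oci.go:197] error getting kernel modules path: Unable to locate kernel modules'

printf '%s\n' "$stderr_log" | grep -c 'error getting kernel modules path'
# prints 2
```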

TestFunctional/serial/StartWithProxy (39.02s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2015: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
functional_test.go:2015: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: exit status 80 (37.1718581s)

-- stdout --
	* [functional-20211117223105-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node functional-20211117223105-9504 in cluster functional-20211117223105-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20211117223105-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:58040 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:58040 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:58040 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:58040 to docker env.
	E1117 22:31:10.472310    9748 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! Local proxy ignored: not passing HTTP_PROXY=localhost:58040 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:58040 to docker env.
	E1117 22:31:37.830734    9748 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p functional-20211117223105-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2017: failed minikube start. args "out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker": exit status 80
functional_test.go:2022: start stdout=* [functional-20211117223105-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
- KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
- MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
- MINIKUBE_LOCATION=12739
* Using the docker driver based on user configuration
* Starting control plane node functional-20211117223105-9504 in cluster functional-20211117223105-9504
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=4000MB) ...
* docker "functional-20211117223105-9504" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=4000MB) ...


, want: *Found network options:*
functional_test.go:2027: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:58040 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:58040 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:58040 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:58040 to docker env.
E1117 22:31:10.472310    9748 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
! Local proxy ignored: not passing HTTP_PROXY=localhost:58040 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:58040 to docker env.
E1117 22:31:37.830734    9748 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
* Failed to start docker container. Running "minikube delete -p functional-20211117223105-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.7341535s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:31:44.590853    6320 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/StartWithProxy (39.02s)
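The missing "You appear to be using a proxy" and "Found network options" messages follow from the proxy being set to `localhost:58040`: minikube refuses to pass a localhost proxy into the container (it would be unreachable there), so the expected proxy-detected path is never taken. A hypothetical check in the same spirit, not minikube's actual code:

```shell
# Hypothetical localhost-proxy detector mirroring the "Local proxy ignored"
# warnings above; minikube's real check lives elsewhere and may differ.
is_local_proxy() {
  case "$1" in
    localhost*|127.*) return 0 ;;
    *)                return 1 ;;
  esac
}

is_local_proxy "localhost:58040" && echo "ignored"   # prints "ignored"
```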

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
functional_test.go:579: audit.json does not contain the profile "functional-20211117223105-9504"
--- FAIL: TestFunctional/serial/AuditLog (0.00s)
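This failure is a direct consequence of the failed start above: the profile was never started successfully, so no entry for it was appended to minikube's audit log, and the test's substring search comes up empty. A self-contained sketch of that search, using a fabricated sample entry (the format here is illustrative, not minikube's exact audit.json schema):

```shell
# Substring search over an audit log, as the AuditLog test effectively does.
# The sample entry below is fabricated for illustration.
audit='{"command":"start","profile":"some-other-profile"}'

printf '%s\n' "$audit" | grep -q 'functional-20211117223105-9504' \
  || echo "profile not recorded"
# prints "profile not recorded"
```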

TestFunctional/serial/SoftStart (59.02s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:600: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --alsologtostderr -v=8
functional_test.go:600: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --alsologtostderr -v=8: exit status 80 (57.066063s)

-- stdout --
	* [functional-20211117223105-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20211117223105-9504 in cluster functional-20211117223105-9504
	* Pulling base image ...
	* docker "functional-20211117223105-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20211117223105-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 22:31:44.793885    6208 out.go:297] Setting OutFile to fd 644 ...
	I1117 22:31:44.867476    6208 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:31:44.867541    6208 out.go:310] Setting ErrFile to fd 640...
	I1117 22:31:44.867569    6208 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:31:44.878426    6208 out.go:304] Setting JSON to false
	I1117 22:31:44.880191    6208 start.go:112] hostinfo: {"hostname":"minikube2","uptime":77620,"bootTime":1637110684,"procs":127,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 22:31:44.880191    6208 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 22:31:44.885994    6208 out.go:176] * [functional-20211117223105-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 22:31:44.886185    6208 notify.go:174] Checking for updates...
	I1117 22:31:44.889536    6208 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 22:31:44.892180    6208 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 22:31:44.894335    6208 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 22:31:44.894715    6208 config.go:176] Loaded profile config "functional-20211117223105-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:31:44.895241    6208 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 22:31:46.439421    6208 docker.go:132] docker version: linux-19.03.12
	I1117 22:31:46.444439    6208 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:31:46.785526    6208 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:31:46.516170617 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:31:46.791115    6208 out.go:176] * Using the docker driver based on existing profile
	I1117 22:31:46.791115    6208 start.go:280] selected driver: docker
	I1117 22:31:46.791115    6208 start.go:775] validating driver "docker" against &{Name:functional-20211117223105-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117223105-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:31:46.791115    6208 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 22:31:46.803568    6208 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:31:47.167052    6208 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:31:46.883871361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:31:47.215789    6208 cni.go:93] Creating CNI manager for ""
	I1117 22:31:47.215789    6208 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 22:31:47.215789    6208 start_flags.go:282] config:
	{Name:functional-20211117223105-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117223105-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:31:47.219194    6208 out.go:176] * Starting control plane node functional-20211117223105-9504 in cluster functional-20211117223105-9504
	I1117 22:31:47.219257    6208 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 22:31:47.222505    6208 out.go:176] * Pulling base image ...
	I1117 22:31:47.222505    6208 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:31:47.222630    6208 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 22:31:47.222630    6208 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 22:31:47.222773    6208 cache.go:57] Caching tarball of preloaded images
	I1117 22:31:47.223254    6208 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 22:31:47.223280    6208 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 22:31:47.223280    6208 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20211117223105-9504\config.json ...
	I1117 22:31:47.316896    6208 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 22:31:47.316943    6208 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 22:31:47.316943    6208 cache.go:206] Successfully downloaded all kic artifacts
	I1117 22:31:47.316943    6208 start.go:313] acquiring machines lock for functional-20211117223105-9504: {Name:mkf27125e9684350d3c166e137abc6f49434f9ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 22:31:47.316943    6208 start.go:317] acquired machines lock for "functional-20211117223105-9504" in 0s
	I1117 22:31:47.316943    6208 start.go:93] Skipping create...Using existing machine configuration
	I1117 22:31:47.317616    6208 fix.go:55] fixHost starting: 
	I1117 22:31:47.325927    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:31:47.415333    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:31:47.415543    6208 fix.go:108] recreateIfNeeded on functional-20211117223105-9504: state= err=unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:47.415543    6208 fix.go:113] machineExists: false. err=machine does not exist
	I1117 22:31:47.419593    6208 out.go:176] * docker "functional-20211117223105-9504" container is missing, will recreate.
	I1117 22:31:47.419593    6208 delete.go:124] DEMOLISHING functional-20211117223105-9504 ...
	I1117 22:31:47.427557    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:31:47.513951    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:31:47.513984    6208 stop.go:75] unable to get state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:47.514241    6208 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:47.522060    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:31:47.609948    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:31:47.609948    6208 delete.go:82] Unable to get host status for functional-20211117223105-9504, assuming it has already been deleted: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:47.613958    6208 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117223105-9504
	W1117 22:31:47.705484    6208 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117223105-9504 returned with exit code 1
	I1117 22:31:47.705484    6208 kic.go:360] could not find the container functional-20211117223105-9504 to remove it. will try anyways
	I1117 22:31:47.709207    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:31:47.794484    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:31:47.794606    6208 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:47.798643    6208 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0"
	W1117 22:31:47.891224    6208 cli_runner.go:162] docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 22:31:47.891301    6208 oci.go:658] error shutdown functional-20211117223105-9504: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:48.897542    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:31:48.982210    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:31:48.982210    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:48.982210    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:31:48.982496    6208 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:49.543007    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:31:49.632093    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:31:49.632365    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:49.632365    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:31:49.632365    6208 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:50.717629    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:31:50.817597    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:31:50.817688    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:50.817688    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:31:50.817872    6208 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:52.134287    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:31:52.228925    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:31:52.229008    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:52.229008    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:31:52.229138    6208 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:53.816765    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:31:53.903974    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:31:53.903974    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:53.903974    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:31:53.904224    6208 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:56.251253    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:31:56.340357    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:31:56.340556    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:31:56.340556    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:31:56.340641    6208 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:00.852134    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:32:00.942964    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:32:00.943278    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:00.943373    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:32:00.943373    6208 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:04.170129    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:32:04.255526    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:32:04.255714    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:04.255810    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:32:04.255810    6208 oci.go:87] couldn't shut down functional-20211117223105-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	 
	I1117 22:32:04.260666    6208 cli_runner.go:115] Run: docker rm -f -v functional-20211117223105-9504
	W1117 22:32:04.348783    6208 cli_runner.go:162] docker rm -f -v functional-20211117223105-9504 returned with exit code 1
	W1117 22:32:04.349726    6208 delete.go:139] delete failed (probably ok) <nil>
	I1117 22:32:04.349726    6208 fix.go:120] Sleeping 1 second for extra luck!
	I1117 22:32:05.350206    6208 start.go:126] createHost starting for "" (driver="docker")
	I1117 22:32:05.354773    6208 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 22:32:05.354993    6208 start.go:160] libmachine.API.Create for "functional-20211117223105-9504" (driver="docker")
	I1117 22:32:05.354993    6208 client.go:168] LocalClient.Create starting
	I1117 22:32:05.355742    6208 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 22:32:05.355982    6208 main.go:130] libmachine: Decoding PEM data...
	I1117 22:32:05.356061    6208 main.go:130] libmachine: Parsing certificate...
	I1117 22:32:05.356314    6208 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 22:32:05.356363    6208 main.go:130] libmachine: Decoding PEM data...
	I1117 22:32:05.356363    6208 main.go:130] libmachine: Parsing certificate...
	I1117 22:32:05.360534    6208 cli_runner.go:115] Run: docker network inspect functional-20211117223105-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 22:32:05.448369    6208 network_create.go:67] Found existing network {name:functional-20211117223105-9504 subnet:0xc0010affb0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 22:32:05.448540    6208 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117223105-9504" container
	I1117 22:32:05.456475    6208 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 22:32:05.551609    6208 cli_runner.go:115] Run: docker volume create functional-20211117223105-9504 --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 22:32:05.637806    6208 oci.go:102] Successfully created a docker volume functional-20211117223105-9504
	I1117 22:32:05.642009    6208 cli_runner.go:115] Run: docker run --rm --name functional-20211117223105-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --entrypoint /usr/bin/test -v functional-20211117223105-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 22:32:06.497427    6208 oci.go:106] Successfully prepared a docker volume functional-20211117223105-9504
	I1117 22:32:06.497427    6208 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:32:06.497427    6208 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 22:32:06.502553    6208 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 22:32:06.503126    6208 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 22:32:06.616737    6208 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 22:32:06.616836    6208 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 22:32:06.857549    6208 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 22:32:06.597456001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 22:32:06.857549    6208 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 22:32:06.857549    6208 client.go:171] LocalClient.Create took 1.5025448s
	I1117 22:32:08.867690    6208 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:32:08.870613    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:08.957567    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:32:08.958057    6208 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:09.112503    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:09.200886    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:32:09.201223    6208 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:09.507101    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:09.596070    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:32:09.596070    6208 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:10.172757    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:10.272756    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	W1117 22:32:10.273020    6208 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	W1117 22:32:10.273020    6208 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:10.273020    6208 start.go:129] duration metric: createHost completed in 4.9226427s
	I1117 22:32:10.280538    6208 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:32:10.284551    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:10.373500    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:32:10.373715    6208 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:10.558512    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:10.646122    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:32:10.646122    6208 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:10.982145    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:11.072880    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:32:11.073101    6208 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:11.538732    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:11.626186    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	W1117 22:32:11.626393    6208 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	W1117 22:32:11.626393    6208 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:11.626393    6208 fix.go:57] fixHost completed within 24.309268s
	I1117 22:32:11.626393    6208 start.go:80] releasing machines lock for "functional-20211117223105-9504", held for 24.309268s
	W1117 22:32:11.626393    6208 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 22:32:11.627124    6208 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:32:11.627124    6208 start.go:547] Will try again in 5 seconds ...
	I1117 22:32:16.629180    6208 start.go:313] acquiring machines lock for functional-20211117223105-9504: {Name:mkf27125e9684350d3c166e137abc6f49434f9ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 22:32:16.629522    6208 start.go:317] acquired machines lock for "functional-20211117223105-9504" in 240.5µs
	I1117 22:32:16.629767    6208 start.go:93] Skipping create...Using existing machine configuration
	I1117 22:32:16.629795    6208 fix.go:55] fixHost starting: 
	I1117 22:32:16.637954    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:32:16.721684    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:32:16.721684    6208 fix.go:108] recreateIfNeeded on functional-20211117223105-9504: state= err=unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:16.721684    6208 fix.go:113] machineExists: false. err=machine does not exist
	I1117 22:32:16.732583    6208 out.go:176] * docker "functional-20211117223105-9504" container is missing, will recreate.
	I1117 22:32:16.732583    6208 delete.go:124] DEMOLISHING functional-20211117223105-9504 ...
	I1117 22:32:16.740899    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:32:16.825580    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:32:16.825695    6208 stop.go:75] unable to get state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:16.825856    6208 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:16.836692    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:32:16.922171    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:32:16.922346    6208 delete.go:82] Unable to get host status for functional-20211117223105-9504, assuming it has already been deleted: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:16.926501    6208 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117223105-9504
	W1117 22:32:17.013420    6208 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117223105-9504 returned with exit code 1
	I1117 22:32:17.013420    6208 kic.go:360] could not find the container functional-20211117223105-9504 to remove it. will try anyways
	I1117 22:32:17.017932    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:32:17.106982    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:32:17.106982    6208 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:17.111398    6208 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0"
	W1117 22:32:17.223251    6208 cli_runner.go:162] docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 22:32:17.223251    6208 oci.go:658] error shutdown functional-20211117223105-9504: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:18.230520    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:32:18.333355    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:32:18.333355    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:18.333475    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:32:18.333531    6208 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:18.734176    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:32:18.826248    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:32:18.826347    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:18.826347    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:32:18.826418    6208 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:19.426906    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:32:19.513678    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:32:19.513878    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:19.513878    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:32:19.513878    6208 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:20.845106    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:32:20.940493    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:32:20.940534    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:20.940534    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:32:20.940534    6208 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:22.159259    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:32:22.248253    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:32:22.248304    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:22.248304    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:32:22.248304    6208 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:24.034204    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:32:24.123857    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:32:24.123857    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:24.123967    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:32:24.124019    6208 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:27.399127    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:32:27.487608    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:32:27.487756    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:27.487756    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:32:27.487756    6208 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:33.591252    6208 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:32:33.679008    6208 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:32:33.679165    6208 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:33.679165    6208 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:32:33.679252    6208 oci.go:87] couldn't shut down functional-20211117223105-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	 
	I1117 22:32:33.685013    6208 cli_runner.go:115] Run: docker rm -f -v functional-20211117223105-9504
	W1117 22:32:33.768789    6208 cli_runner.go:162] docker rm -f -v functional-20211117223105-9504 returned with exit code 1
	W1117 22:32:33.769416    6208 delete.go:139] delete failed (probably ok) <nil>
	I1117 22:32:33.769416    6208 fix.go:120] Sleeping 1 second for extra luck!
	I1117 22:32:34.770557    6208 start.go:126] createHost starting for "" (driver="docker")
	I1117 22:32:34.774553    6208 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 22:32:34.774754    6208 start.go:160] libmachine.API.Create for "functional-20211117223105-9504" (driver="docker")
	I1117 22:32:34.774754    6208 client.go:168] LocalClient.Create starting
	I1117 22:32:34.775391    6208 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 22:32:34.775600    6208 main.go:130] libmachine: Decoding PEM data...
	I1117 22:32:34.775672    6208 main.go:130] libmachine: Parsing certificate...
	I1117 22:32:34.775874    6208 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 22:32:34.776010    6208 main.go:130] libmachine: Decoding PEM data...
	I1117 22:32:34.776114    6208 main.go:130] libmachine: Parsing certificate...
	I1117 22:32:34.780456    6208 cli_runner.go:115] Run: docker network inspect functional-20211117223105-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 22:32:34.870631    6208 network_create.go:67] Found existing network {name:functional-20211117223105-9504 subnet:0xc0012fc090 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 22:32:34.870631    6208 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117223105-9504" container
	I1117 22:32:34.878514    6208 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 22:32:34.986123    6208 cli_runner.go:115] Run: docker volume create functional-20211117223105-9504 --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 22:32:35.077418    6208 oci.go:102] Successfully created a docker volume functional-20211117223105-9504
	I1117 22:32:35.082356    6208 cli_runner.go:115] Run: docker run --rm --name functional-20211117223105-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --entrypoint /usr/bin/test -v functional-20211117223105-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 22:32:35.949702    6208 oci.go:106] Successfully prepared a docker volume functional-20211117223105-9504
	I1117 22:32:35.950045    6208 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:32:35.950161    6208 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 22:32:35.955778    6208 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:32:35.955778    6208 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 22:32:36.068486    6208 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 22:32:36.068486    6208 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 22:32:36.309996    6208 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:32:36.035311506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 22:32:36.310388    6208 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 22:32:36.310415    6208 client.go:171] LocalClient.Create took 1.5356491s
	I1117 22:32:38.319356    6208 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:32:38.322954    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:38.407175    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:32:38.408182    6208 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:38.611940    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:38.698994    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:32:38.699258    6208 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:39.003777    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:39.090842    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:32:39.090842    6208 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:39.801932    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:39.889821    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	W1117 22:32:39.890083    6208 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	W1117 22:32:39.890160    6208 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:39.890160    6208 start.go:129] duration metric: createHost completed in 5.1192549s
	I1117 22:32:39.897646    6208 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:32:39.900672    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:39.996727    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:32:39.997127    6208 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:40.344321    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:40.430036    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:32:40.430305    6208 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:40.884685    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:40.969626    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:32:40.970000    6208 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:41.554203    6208 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:32:41.640117    6208 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	W1117 22:32:41.640604    6208 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	W1117 22:32:41.640604    6208 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:32:41.640604    6208 fix.go:57] fixHost completed within 25.0106216s
	I1117 22:32:41.640604    6208 start.go:80] releasing machines lock for "functional-20211117223105-9504", held for 25.0108652s
	W1117 22:32:41.640825    6208 out.go:241] * Failed to start docker container. Running "minikube delete -p functional-20211117223105-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:32:41.651279    6208 out.go:176] 
	W1117 22:32:41.651279    6208 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 22:32:41.651279    6208 out.go:241] * 
	W1117 22:32:41.652873    6208 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 22:32:41.655140    6208 out.go:176] 

** /stderr **
functional_test.go:602: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --alsologtostderr -v=8": exit status 80
functional_test.go:604: soft start took 57.1710395s for "functional-20211117223105-9504" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.7401978s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:32:43.608372   11412 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/SoftStart (59.02s)

TestFunctional/serial/KubeContext (2.22s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:622: (dbg) Run:  kubectl config current-context
functional_test.go:622: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (392.8434ms)

** stderr ** 
	W1117 22:32:43.945116   10728 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: current-context is not set

** /stderr **
functional_test.go:624: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:628: expected current-context = "functional-20211117223105-9504", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubeContext]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.7242405s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:32:45.829571    9252 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/KubeContext (2.22s)

TestFunctional/serial/KubectlGetPods (2.15s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:637: (dbg) Run:  kubectl --context functional-20211117223105-9504 get po -A
functional_test.go:637: (dbg) Non-zero exit: kubectl --context functional-20211117223105-9504 get po -A: exit status 1 (279.4124ms)

** stderr ** 
	W1117 22:32:46.050589    7608 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117223105-9504
	* cluster has no server defined

** /stderr **
functional_test.go:639: failed to get kubectl pods: args "kubectl --context functional-20211117223105-9504 get po -A" : exit status 1
functional_test.go:643: expected stderr to be empty but got *"W1117 22:32:46.050589    7608 loader.go:223] Config not found: C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig\nError in configuration: \n* context was not found for specified context: functional-20211117223105-9504\n* cluster has no server defined\n"*: args "kubectl --context functional-20211117223105-9504 get po -A"
functional_test.go:646: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-20211117223105-9504 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.7356993s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:32:47.974723   12248 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/KubectlGetPods (2.15s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:983: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cache add k8s.gcr.io/pause:3.1
functional_test.go:983: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cache add k8s.gcr.io/pause:3.1: exit status 10 (1.7959729s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\pause_3.1": write: unable to calculate manifest: Error: No such image: k8s.gcr.io/pause:3.1
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_cache_3602adf0e91aa53555a81eb8e73b2395349ccc18_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:985: failed to 'cache add' remote image "k8s.gcr.io/pause:3.1". args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cache add k8s.gcr.io/pause:3.1" err exit status 10
functional_test.go:983: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cache add k8s.gcr.io/pause:3.3
functional_test.go:983: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cache add k8s.gcr.io/pause:3.3: exit status 10 (1.8072175s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\pause_3.3": write: unable to calculate manifest: Error: No such image: k8s.gcr.io/pause:3.3
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_cache_283d3d162ec1dadb637ad408d6e92db6f82d1ecd_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:985: failed to 'cache add' remote image "k8s.gcr.io/pause:3.3". args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cache add k8s.gcr.io/pause:3.3" err exit status 10
functional_test.go:983: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cache add k8s.gcr.io/pause:latest
functional_test.go:983: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cache add k8s.gcr.io/pause:latest: exit status 10 (1.7794594s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\pause_latest": write: unable to calculate manifest: Error: No such image: k8s.gcr.io/pause:latest
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_cache_2661bae674a31ecac63d9626c60205c285fbc61d_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:985: failed to 'cache add' remote image "k8s.gcr.io/pause:latest". args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cache add k8s.gcr.io/pause:latest" err exit status 10
--- FAIL: TestFunctional/serial/CacheCmd/cache/add_remote (5.38s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1039: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
functional_test.go:1039: (dbg) Non-zero exit: out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3: exit status 30 (334.4713ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: remove C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.3: The system cannot find the file specified.
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_cache_0200f5de9a7310ab3e921761a9abba90ba90b915_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1041: failed to delete image k8s.gcr.io/pause:3.3 from cache. args "out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3": exit status 30
--- FAIL: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.33s)

TestFunctional/serial/CacheCmd/cache/list (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1047: (dbg) Run:  out/minikube-windows-amd64.exe cache list
functional_test.go:1052: expected 'cache list' output to include 'k8s.gcr.io/pause' but got: ******
--- FAIL: TestFunctional/serial/CacheCmd/cache/list (0.29s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1061: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh sudo crictl images
functional_test.go:1061: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh sudo crictl images: exit status 80 (1.732086s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f40552ee918ac053c4c404bc1ee7532c196ce64c_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1063: failed to get images by "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh sudo crictl images" ssh exit status 80
functional_test.go:1067: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f40552ee918ac053c4c404bc1ee7532c196ce64c_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.73s)

TestFunctional/serial/CacheCmd/cache/cache_reload (5.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1084: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh sudo docker rmi k8s.gcr.io/pause:latest: exit status 80 (1.7571023s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_695159ccd5e0da3f5d811f2823eb9163b9dc65a6_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1087: failed to manually delete image "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh sudo docker rmi k8s.gcr.io/pause:latest" : exit status 80
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1090: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 80 (1.791071s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_90c12c9ea894b73e3971aa1ec67d0a7aeefe0b8f_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1095: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cache reload
functional_test.go:1100: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1100: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 80 (1.8026571s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_90c12c9ea894b73e3971aa1ec67d0a7aeefe0b8f_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1102: expected "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh sudo crictl inspecti k8s.gcr.io/pause:latest" to run successfully but got error: exit status 80
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (5.65s)
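
Every failure in this block traces to one condition: the profile container is gone (the `docker inspect` post-mortems below show only the minikube network, with `"Containers": {}`), so each `minikube ssh`/`status` call fails its container probe. A minimal shell sketch of how that probe degrades, with the docker call stubbed out since no daemon is assumed here; the stub's behavior is taken directly from the stderr blocks above (exit 1, `Error: No such container: <name>`):

```shell
# Stand-in for: docker container inspect "$1" --format '{{.State.Status}}'
# Simulates the missing-container case captured in the log (assumption:
# no docker daemon is available; output text copied from the report).
inspect() {
  echo "Error: No such container: $1" >&2
  return 1
}

# minikube maps the failed probe to the "Nonexistent" host state rather
# than propagating the raw docker error (see the `status --format={{.Host}}`
# output later in this report).
if ! status=$(inspect "functional-20211117223105-9504" 2>/dev/null); then
  status="Nonexistent"
fi
echo "$status"
```

This is why the later `status` invocations exit 7 with `Nonexistent` instead of a docker error: the probe failure is absorbed into the host-state value.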

TestFunctional/serial/CacheCmd/cache/delete (0.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1109: (dbg) Non-zero exit: out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1: exit status 30 (301.7488ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: remove C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.1: The system cannot find the file specified.
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_cache_628bcd45961c95abb7104c276d5002b64ad98980_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1111: failed to delete k8s.gcr.io/pause:3.1 from cache. args "out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1": exit status 30
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
functional_test.go:1109: (dbg) Non-zero exit: out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest: exit status 30 (337.5468ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: remove C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_latest: The system cannot find the file specified.
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_cache_d3bc4acdc274c80f7ac3938f15b56091c2d7a8d5_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1111: failed to delete k8s.gcr.io/pause:latest from cache. args "out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest": exit status 30
--- FAIL: TestFunctional/serial/CacheCmd/cache/delete (0.64s)
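
The `cache delete` failures look secondary: the earlier `cache add` steps in this run did not complete, so there is likely no tarball under `.minikube\cache\images\` to remove, and minikube surfaces the missing-file error as `HOST_DEL_CACHE`. A self-contained sketch of that failure shape (illustrative POSIX temp paths, not the real Windows cache layout on this host):

```shell
# Build an empty image-cache directory, mirroring a run where `cache add`
# never wrote pause_3.1 (assumption: HOST_DEL_CACHE is raised whenever the
# underlying remove fails, as the stderr above indicates).
cache=$(mktemp -d)
mkdir -p "$cache/images/k8s.gcr.io"

if rm "$cache/images/k8s.gcr.io/pause_3.1" 2>/dev/null; then
  result="deleted"
else
  # Equivalent of: X Exiting due to HOST_DEL_CACHE: remove ...\pause_3.1
  result="Exiting due to HOST_DEL_CACHE: pause_3.1 was never cached"
fi
echo "$result"
rm -r "$cache"
```

Under this reading, exit status 30 here is a downstream symptom of the earlier provisioning failures, not an independent cache bug.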

TestFunctional/serial/MinikubeKubectlCmd (4.06s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:657: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 kubectl -- --context functional-20211117223105-9504 get pods
functional_test.go:657: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 kubectl -- --context functional-20211117223105-9504 get pods: exit status 1 (2.1923772s)

** stderr ** 
	W1117 22:33:09.329312    6412 loader.go:221] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117223105-9504
	* no server found for cluster "functional-20211117223105-9504"

** /stderr **
functional_test.go:660: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 kubectl -- --context functional-20211117223105-9504 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.7698234s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:33:11.269827     256 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (4.06s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (3.77s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:682: (dbg) Run:  out\kubectl.exe --context functional-20211117223105-9504 get pods
functional_test.go:682: (dbg) Non-zero exit: out\kubectl.exe --context functional-20211117223105-9504 get pods: exit status 1 (1.8376298s)

** stderr ** 
	W1117 22:33:13.029744   10632 loader.go:221] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117223105-9504
	* no server found for cluster "functional-20211117223105-9504"

** /stderr **
functional_test.go:685: failed to run kubectl directly. args "out\\kubectl.exe --context functional-20211117223105-9504 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.8288313s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:33:15.036618   12276 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (3.77s)

TestFunctional/serial/ExtraConfig (58.84s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:698: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:698: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (56.9948674s)

-- stdout --
	* [functional-20211117223105-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20211117223105-9504 in cluster functional-20211117223105-9504
	* Pulling base image ...
	* docker "functional-20211117223105-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20211117223105-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 22:33:37.310097    3676 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 22:34:06.693318    3676 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p functional-20211117223105-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:700: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:702: restart took 56.9954174s for "functional-20211117223105-9504" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.741929s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:34:13.878418    8108 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/ExtraConfig (58.84s)

TestFunctional/serial/ComponentHealth (2.16s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:752: (dbg) Run:  kubectl --context functional-20211117223105-9504 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:752: (dbg) Non-zero exit: kubectl --context functional-20211117223105-9504 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (293.7187ms)

** stderr ** 
	W1117 22:34:14.114791    7356 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20211117223105-9504" does not exist

** /stderr **
functional_test.go:754: failed to get components. args "kubectl --context functional-20211117223105-9504 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.7628534s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:34:16.040825    8660 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/ComponentHealth (2.16s)

TestFunctional/serial/LogsCmd (2.22s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 logs
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 logs: exit status 80 (1.9020348s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------|-------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	| Command |                           Args                           |               Profile               |       User        | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------------------------|-------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                                    | download-only-20211117222633-9504   | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:27:15 GMT | Wed, 17 Nov 2021 22:27:18 GMT |
	| delete  | -p                                                       | download-only-20211117222633-9504   | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:27:18 GMT | Wed, 17 Nov 2021 22:27:20 GMT |
	|         | download-only-20211117222633-9504                        |                                     |                   |         |                               |                               |
	| delete  | -p                                                       | download-only-20211117222633-9504   | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:27:20 GMT | Wed, 17 Nov 2021 22:27:22 GMT |
	|         | download-only-20211117222633-9504                        |                                     |                   |         |                               |                               |
	| delete  | -p                                                       | download-docker-20211117222722-9504 | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:27:43 GMT | Wed, 17 Nov 2021 22:27:46 GMT |
	|         | download-docker-20211117222722-9504                      |                                     |                   |         |                               |                               |
	| delete  | -p addons-20211117222746-9504                            | addons-20211117222746-9504          | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:29:11 GMT | Wed, 17 Nov 2021 22:29:14 GMT |
	| delete  | -p nospam-20211117222914-9504                            | nospam-20211117222914-9504          | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:31:03 GMT | Wed, 17 Nov 2021 22:31:05 GMT |
	| -p      | functional-20211117223105-9504 cache add                 | functional-20211117223105-9504      | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:32:54 GMT | Wed, 17 Nov 2021 22:32:56 GMT |
	|         | minikube-local-cache-test:functional-20211117223105-9504 |                                     |                   |         |                               |                               |
	| -p      | functional-20211117223105-9504 cache delete              | functional-20211117223105-9504      | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:32:56 GMT | Wed, 17 Nov 2021 22:32:56 GMT |
	|         | minikube-local-cache-test:functional-20211117223105-9504 |                                     |                   |         |                               |                               |
	| cache   | list                                                     | minikube                            | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:32:57 GMT | Wed, 17 Nov 2021 22:32:57 GMT |
	| -p      | functional-20211117223105-9504                           | functional-20211117223105-9504      | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:33:02 GMT | Wed, 17 Nov 2021 22:33:02 GMT |
	|         | cache reload                                             |                                     |                   |         |                               |                               |
	|---------|----------------------------------------------------------|-------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 22:33:15
	Running on machine: minikube2
	Binary: Built with gc go1.17.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 22:33:15.247898    3676 out.go:297] Setting OutFile to fd 1004 ...
	I1117 22:33:15.325035    3676 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:33:15.325035    3676 out.go:310] Setting ErrFile to fd 1008...
	I1117 22:33:15.325035    3676 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:33:15.336661    3676 out.go:304] Setting JSON to false
	I1117 22:33:15.338820    3676 start.go:112] hostinfo: {"hostname":"minikube2","uptime":77711,"bootTime":1637110684,"procs":127,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 22:33:15.338820    3676 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 22:33:15.344752    3676 out.go:176] * [functional-20211117223105-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 22:33:15.344914    3676 notify.go:174] Checking for updates...
	I1117 22:33:15.347868    3676 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 22:33:15.350107    3676 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 22:33:15.352155    3676 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 22:33:15.352155    3676 config.go:176] Loaded profile config "functional-20211117223105-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:33:15.353103    3676 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 22:33:16.938081    3676 docker.go:132] docker version: linux-19.03.12
	I1117 22:33:16.941082    3676 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:33:17.299819    3676 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:33:17.02211151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:33:17.310098    3676 out.go:176] * Using the docker driver based on existing profile
	I1117 22:33:17.310098    3676 start.go:280] selected driver: docker
	I1117 22:33:17.310098    3676 start.go:775] validating driver "docker" against &{Name:functional-20211117223105-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117223105-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:33:17.310661    3676 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 22:33:17.322221    3676 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:33:17.650902    3676 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:33:17.40195517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:33:17.699844    3676 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 22:33:17.699844    3676 cni.go:93] Creating CNI manager for ""
	I1117 22:33:17.699844    3676 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 22:33:17.699844    3676 start_flags.go:282] config:
	{Name:functional-20211117223105-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117223105-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:33:17.704189    3676 out.go:176] * Starting control plane node functional-20211117223105-9504 in cluster functional-20211117223105-9504
	I1117 22:33:17.704189    3676 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 22:33:17.712248    3676 out.go:176] * Pulling base image ...
	I1117 22:33:17.712872    3676 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:33:17.712939    3676 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 22:33:17.713048    3676 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 22:33:17.713184    3676 cache.go:57] Caching tarball of preloaded images
	I1117 22:33:17.713371    3676 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 22:33:17.713371    3676 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 22:33:17.713901    3676 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20211117223105-9504\config.json ...
	I1117 22:33:17.805953    3676 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 22:33:17.805953    3676 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 22:33:17.805953    3676 cache.go:206] Successfully downloaded all kic artifacts
	I1117 22:33:17.805953    3676 start.go:313] acquiring machines lock for functional-20211117223105-9504: {Name:mkf27125e9684350d3c166e137abc6f49434f9ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 22:33:17.805953    3676 start.go:317] acquired machines lock for "functional-20211117223105-9504" in 0s
	I1117 22:33:17.805953    3676 start.go:93] Skipping create...Using existing machine configuration
	I1117 22:33:17.805953    3676 fix.go:55] fixHost starting: 
	I1117 22:33:17.811906    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:17.901111    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:17.901111    3676 fix.go:108] recreateIfNeeded on functional-20211117223105-9504: state= err=unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:17.901111    3676 fix.go:113] machineExists: false. err=machine does not exist
	I1117 22:33:17.907114    3676 out.go:176] * docker "functional-20211117223105-9504" container is missing, will recreate.
	I1117 22:33:17.907114    3676 delete.go:124] DEMOLISHING functional-20211117223105-9504 ...
	I1117 22:33:17.914099    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:18.008073    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:33:18.008073    3676 stop.go:75] unable to get state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:18.008073    3676 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:18.015050    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:18.108865    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:18.108865    3676 delete.go:82] Unable to get host status for functional-20211117223105-9504, assuming it has already been deleted: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:18.111875    3676 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117223105-9504
	W1117 22:33:18.211502    3676 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117223105-9504 returned with exit code 1
	I1117 22:33:18.211502    3676 kic.go:360] could not find the container functional-20211117223105-9504 to remove it. will try anyways
	I1117 22:33:18.215477    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:18.308688    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:33:18.308688    3676 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:18.311735    3676 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0"
	W1117 22:33:18.399243    3676 cli_runner.go:162] docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 22:33:18.399243    3676 oci.go:658] error shutdown functional-20211117223105-9504: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:19.402929    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:19.488280    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:19.488280    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:19.488280    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:19.488280    3676 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
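	
	(Editor's note: the `%!v(MISSING)` token in the retry messages above is not container output; it is Go's fmt package marking a format verb that had no matching argument. A minimal reproduction of that marker, unrelated to minikube's actual code:)
	
	```go
	package main
	
	import "fmt"
	
	// One %v verb with zero arguments: instead of failing, fmt emits the
	// %!v(MISSING) marker inline, which is exactly the token seen in the
	// retry.go log lines above.
	func main() {
		s := fmt.Sprintf("couldn't verify container is exited. %v")
		fmt.Println(s)
	}
	```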
	I1117 22:33:20.044614    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:20.141722    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:20.141722    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:20.141722    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:20.141722    3676 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:21.225285    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:21.316233    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:21.316233    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:21.316233    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:21.316233    3676 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:22.631050    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:22.719426    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:22.719426    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:22.719426    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:22.719426    3676 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:24.305804    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:24.399866    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:24.399866    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:24.399866    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:24.399866    3676 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:26.744350    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:26.834047    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:26.834047    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:26.834047    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:26.834047    3676 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:31.344468    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:31.438942    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:31.438942    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:31.438942    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:31.438942    3676 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:34.663787    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:34.758987    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:34.758987    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:34.758987    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:34.758987    3676 oci.go:87] couldn't shut down functional-20211117223105-9504 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	 
	I1117 22:33:34.762930    3676 cli_runner.go:115] Run: docker rm -f -v functional-20211117223105-9504
	W1117 22:33:34.849478    3676 cli_runner.go:162] docker rm -f -v functional-20211117223105-9504 returned with exit code 1
	W1117 22:33:34.850437    3676 delete.go:139] delete failed (probably ok) <nil>
	I1117 22:33:34.850437    3676 fix.go:120] Sleeping 1 second for extra luck!
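	
	(Editor's note: the shutdown-verification loop above retries the `docker container inspect` probe with roughly doubling delays before giving up. A minimal sketch of that backoff pattern, assuming a generic probe function; this is illustrative only, not minikube's actual retry.go:)
	
	```go
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// retryWithBackoff re-runs probe with a roughly doubling delay between
	// attempts, logging each failure in the style of the log lines above,
	// until probe succeeds or attempts are exhausted.
	func retryWithBackoff(attempts int, initial time.Duration, probe func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = probe(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}
	
	func main() {
		calls := 0
		// A probe that fails twice, then succeeds, standing in for the
		// "docker container inspect ... --format={{.State.Status}}" check.
		err := retryWithBackoff(5, time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New(`unknown state "functional-..."`)
			}
			return nil
		})
		fmt.Println("calls:", calls, "err:", err)
	}
	```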
	I1117 22:33:35.851190    3676 start.go:126] createHost starting for "" (driver="docker")
	I1117 22:33:35.855661    3676 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 22:33:35.856029    3676 start.go:160] libmachine.API.Create for "functional-20211117223105-9504" (driver="docker")
	I1117 22:33:35.856029    3676 client.go:168] LocalClient.Create starting
	I1117 22:33:35.856029    3676 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 22:33:35.856029    3676 main.go:130] libmachine: Decoding PEM data...
	I1117 22:33:35.856029    3676 main.go:130] libmachine: Parsing certificate...
	I1117 22:33:35.857059    3676 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 22:33:35.857059    3676 main.go:130] libmachine: Decoding PEM data...
	I1117 22:33:35.857059    3676 main.go:130] libmachine: Parsing certificate...
	I1117 22:33:35.863117    3676 cli_runner.go:115] Run: docker network inspect functional-20211117223105-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 22:33:35.948175    3676 network_create.go:67] Found existing network {name:functional-20211117223105-9504 subnet:0xc001246e40 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 22:33:35.948175    3676 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117223105-9504" container
	I1117 22:33:35.955393    3676 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 22:33:36.047866    3676 cli_runner.go:115] Run: docker volume create functional-20211117223105-9504 --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 22:33:36.135034    3676 oci.go:102] Successfully created a docker volume functional-20211117223105-9504
	I1117 22:33:36.138032    3676 cli_runner.go:115] Run: docker run --rm --name functional-20211117223105-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --entrypoint /usr/bin/test -v functional-20211117223105-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 22:33:36.949034    3676 oci.go:106] Successfully prepared a docker volume functional-20211117223105-9504
	I1117 22:33:36.949034    3676 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:33:36.949034    3676 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 22:33:36.953049    3676 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 22:33:36.953049    3676 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 22:33:37.064063    3676 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 22:33:37.064063    3676 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 22:33:37.310097    3676 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 22:33:37.041173333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 22:33:37.310097    3676 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 22:33:37.310097    3676 client.go:171] LocalClient.Create took 1.4540573s
	I1117 22:33:39.317046    3676 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:33:39.322572    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:33:39.414053    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:33:39.414053    3676 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:39.568371    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:33:39.660652    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:33:39.660652    3676 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:39.966704    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:33:40.051017    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:33:40.051300    3676 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:40.627804    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:33:40.712565    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	W1117 22:33:40.712565    3676 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	W1117 22:33:40.712565    3676 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:40.712565    3676 start.go:129] duration metric: createHost completed in 4.8613391s
	I1117 22:33:40.720233    3676 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:33:40.723599    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:33:40.811587    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:33:40.811736    3676 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:40.995258    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:33:41.082946    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:33:41.083049    3676 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:41.418919    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:33:41.532012    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:33:41.532153    3676 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:41.997213    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:33:42.090358    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	W1117 22:33:42.090569    3676 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	W1117 22:33:42.090569    3676 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:42.090569    3676 fix.go:57] fixHost completed within 24.2844333s
	I1117 22:33:42.090670    3676 start.go:80] releasing machines lock for "functional-20211117223105-9504", held for 24.284534s
	W1117 22:33:42.090754    3676 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 22:33:42.090924    3676 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:33:42.091080    3676 start.go:547] Will try again in 5 seconds ...
	I1117 22:33:47.091465    3676 start.go:313] acquiring machines lock for functional-20211117223105-9504: {Name:mkf27125e9684350d3c166e137abc6f49434f9ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 22:33:47.091465    3676 start.go:317] acquired machines lock for "functional-20211117223105-9504" in 0s
	I1117 22:33:47.092035    3676 start.go:93] Skipping create...Using existing machine configuration
	I1117 22:33:47.092035    3676 fix.go:55] fixHost starting: 
	I1117 22:33:47.102292    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:47.190636    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:47.190636    3676 fix.go:108] recreateIfNeeded on functional-20211117223105-9504: state= err=unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:47.190636    3676 fix.go:113] machineExists: false. err=machine does not exist
	I1117 22:33:47.195790    3676 out.go:176] * docker "functional-20211117223105-9504" container is missing, will recreate.
	I1117 22:33:47.195790    3676 delete.go:124] DEMOLISHING functional-20211117223105-9504 ...
	I1117 22:33:47.202758    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:47.287509    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:33:47.287509    3676 stop.go:75] unable to get state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:47.287656    3676 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:47.296563    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:47.386844    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:47.387057    3676 delete.go:82] Unable to get host status for functional-20211117223105-9504, assuming it has already been deleted: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:47.392656    3676 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117223105-9504
	W1117 22:33:47.487253    3676 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117223105-9504 returned with exit code 1
	I1117 22:33:47.487431    3676 kic.go:360] could not find the container functional-20211117223105-9504 to remove it. will try anyways
	I1117 22:33:47.491938    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:47.579115    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:33:47.579115    3676 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:47.584555    3676 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0"
	W1117 22:33:47.672448    3676 cli_runner.go:162] docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 22:33:47.672448    3676 oci.go:658] error shutdown functional-20211117223105-9504: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:48.678756    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:48.766091    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:48.766091    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:48.766091    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:48.766091    3676 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:49.163585    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:49.251835    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:49.252036    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:49.252036    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:49.252130    3676 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:49.851453    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:49.938861    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:49.938861    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:49.938861    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:49.938861    3676 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:51.270447    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:51.355770    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:51.356068    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:51.356068    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:51.356155    3676 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:52.574833    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:52.662154    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:52.662370    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:52.662370    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:52.662370    3676 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:54.447942    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:54.540490    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:54.540644    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:54.540644    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:54.540644    3676 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:57.814604    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:33:57.899583    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:33:57.899583    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:33:57.899583    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:33:57.899583    3676 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:34:04.003853    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:34:04.106653    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:34:04.106895    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:34:04.106895    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
	I1117 22:34:04.106895    3676 oci.go:87] couldn't shut down functional-20211117223105-9504 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	 
	I1117 22:34:04.112987    3676 cli_runner.go:115] Run: docker rm -f -v functional-20211117223105-9504
	W1117 22:34:04.199073    3676 cli_runner.go:162] docker rm -f -v functional-20211117223105-9504 returned with exit code 1
	W1117 22:34:04.200913    3676 delete.go:139] delete failed (probably ok) <nil>
	I1117 22:34:04.200913    3676 fix.go:120] Sleeping 1 second for extra luck!
	I1117 22:34:05.201940    3676 start.go:126] createHost starting for "" (driver="docker")
	I1117 22:34:05.205306    3676 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 22:34:05.205615    3676 start.go:160] libmachine.API.Create for "functional-20211117223105-9504" (driver="docker")
	I1117 22:34:05.205615    3676 client.go:168] LocalClient.Create starting
	I1117 22:34:05.206156    3676 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 22:34:05.206320    3676 main.go:130] libmachine: Decoding PEM data...
	I1117 22:34:05.206320    3676 main.go:130] libmachine: Parsing certificate...
	I1117 22:34:05.206465    3676 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 22:34:05.206636    3676 main.go:130] libmachine: Decoding PEM data...
	I1117 22:34:05.206705    3676 main.go:130] libmachine: Parsing certificate...
	I1117 22:34:05.210982    3676 cli_runner.go:115] Run: docker network inspect functional-20211117223105-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 22:34:05.299269    3676 network_create.go:67] Found existing network {name:functional-20211117223105-9504 subnet:0xc000ed08d0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 22:34:05.299269    3676 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117223105-9504" container
	I1117 22:34:05.307935    3676 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 22:34:05.397335    3676 cli_runner.go:115] Run: docker volume create functional-20211117223105-9504 --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 22:34:05.482339    3676 oci.go:102] Successfully created a docker volume functional-20211117223105-9504
	I1117 22:34:05.487786    3676 cli_runner.go:115] Run: docker run --rm --name functional-20211117223105-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --entrypoint /usr/bin/test -v functional-20211117223105-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 22:34:06.325344    3676 oci.go:106] Successfully prepared a docker volume functional-20211117223105-9504
	I1117 22:34:06.325344    3676 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:34:06.325344    3676 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 22:34:06.330172    3676 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:34:06.330172    3676 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 22:34:06.437568    3676 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 22:34:06.437854    3676 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 22:34:06.692991    3676 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 22:34:06.421261579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 22:34:06.693318    3676 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 22:34:06.693392    3676 client.go:171] LocalClient.Create took 1.4877655s
	I1117 22:34:08.701656    3676 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:34:08.705034    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:34:08.794357    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:34:08.794426    3676 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:34:08.998173    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:34:09.085657    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:34:09.085930    3676 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:34:09.389217    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:34:09.476874    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:34:09.476874    3676 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:34:10.186135    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:34:10.274749    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	W1117 22:34:10.274827    3676 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	W1117 22:34:10.274916    3676 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:34:10.274916    3676 start.go:129] duration metric: createHost completed in 5.0729382s
	I1117 22:34:10.281508    3676 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:34:10.285148    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:34:10.368828    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:34:10.368828    3676 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:34:10.713471    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:34:10.805324    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:34:10.805715    3676 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:34:11.259396    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:34:11.348588    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	I1117 22:34:11.348693    3676 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:34:11.930497    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
	W1117 22:34:12.020895    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
	W1117 22:34:12.021153    3676 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	W1117 22:34:12.021153    3676 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	I1117 22:34:12.021153    3676 fix.go:57] fixHost completed within 24.9289315s
	I1117 22:34:12.021244    3676 start.go:80] releasing machines lock for "functional-20211117223105-9504", held for 24.9295018s
	W1117 22:34:12.021707    3676 out.go:241] * Failed to start docker container. Running "minikube delete -p functional-20211117223105-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:34:12.025975    3676 out.go:176] 
	W1117 22:34:12.025975    3676 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 22:34:12.025975    3676 out.go:241] * 
	W1117 22:34:12.027458    3676 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	
	

-- /stdout --
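The probe that keeps failing in the log above is minikube's free-space check: it runs `df -h /var | awk 'NR==2{print $5}'` over SSH and reads the `Use%` column. A minimal sketch of that awk step, fed canned `df -h` output so it runs without a container (the filesystem figures here are made up for illustration):

```shell
# Same awk program as in the ssh_runner line above: NR==2 selects the data
# row of `df -h /var`, and $5 is the Use% column.
fake_df='Filesystem      Size  Used Avail Use% Mounted on
overlay          59G   12G   44G  22% /var'
usage=$(printf '%s\n' "$fake_df" | awk 'NR==2{print $5}')
echo "$usage"
```

In the log this probe never gets that far: SSH port lookup fails first because the container was never created, so `start.go` reports "error getting percentage of /var that is free".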
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_42.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1175: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 logs failed: exit status 80
functional_test.go:1165: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|----------------------------------------------------------|-------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
| Command |                           Args                           |               Profile               |       User        | Version |          Start Time           |           End Time            |
|---------|----------------------------------------------------------|-------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
| delete  | --all                                                    | download-only-20211117222633-9504   | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:27:15 GMT | Wed, 17 Nov 2021 22:27:18 GMT |
| delete  | -p                                                       | download-only-20211117222633-9504   | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:27:18 GMT | Wed, 17 Nov 2021 22:27:20 GMT |
|         | download-only-20211117222633-9504                        |                                     |                   |         |                               |                               |
| delete  | -p                                                       | download-only-20211117222633-9504   | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:27:20 GMT | Wed, 17 Nov 2021 22:27:22 GMT |
|         | download-only-20211117222633-9504                        |                                     |                   |         |                               |                               |
| delete  | -p                                                       | download-docker-20211117222722-9504 | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:27:43 GMT | Wed, 17 Nov 2021 22:27:46 GMT |
|         | download-docker-20211117222722-9504                      |                                     |                   |         |                               |                               |
| delete  | -p addons-20211117222746-9504                            | addons-20211117222746-9504          | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:29:11 GMT | Wed, 17 Nov 2021 22:29:14 GMT |
| delete  | -p nospam-20211117222914-9504                            | nospam-20211117222914-9504          | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:31:03 GMT | Wed, 17 Nov 2021 22:31:05 GMT |
| -p      | functional-20211117223105-9504 cache add                 | functional-20211117223105-9504      | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:32:54 GMT | Wed, 17 Nov 2021 22:32:56 GMT |
|         | minikube-local-cache-test:functional-20211117223105-9504 |                                     |                   |         |                               |                               |
| -p      | functional-20211117223105-9504 cache delete              | functional-20211117223105-9504      | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:32:56 GMT | Wed, 17 Nov 2021 22:32:56 GMT |
|         | minikube-local-cache-test:functional-20211117223105-9504 |                                     |                   |         |                               |                               |
| cache   | list                                                     | minikube                            | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:32:57 GMT | Wed, 17 Nov 2021 22:32:57 GMT |
| -p      | functional-20211117223105-9504                           | functional-20211117223105-9504      | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:33:02 GMT | Wed, 17 Nov 2021 22:33:02 GMT |
|         | cache reload                                             |                                     |                   |         |                               |                               |
|---------|----------------------------------------------------------|-------------------------------------|-------------------|---------|-------------------------------|-------------------------------|

* 
* ==> Last Start <==
* Log file created at: 2021/11/17 22:33:15
Running on machine: minikube2
Binary: Built with gc go1.17.3 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1117 22:33:15.247898    3676 out.go:297] Setting OutFile to fd 1004 ...
I1117 22:33:15.325035    3676 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 22:33:15.325035    3676 out.go:310] Setting ErrFile to fd 1008...
I1117 22:33:15.325035    3676 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 22:33:15.336661    3676 out.go:304] Setting JSON to false
I1117 22:33:15.338820    3676 start.go:112] hostinfo: {"hostname":"minikube2","uptime":77711,"bootTime":1637110684,"procs":127,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
W1117 22:33:15.338820    3676 start.go:120] gopshost.Virtualization returned error: not implemented yet
I1117 22:33:15.344752    3676 out.go:176] * [functional-20211117223105-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
I1117 22:33:15.344914    3676 notify.go:174] Checking for updates...
I1117 22:33:15.347868    3676 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
I1117 22:33:15.350107    3676 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
I1117 22:33:15.352155    3676 out.go:176]   - MINIKUBE_LOCATION=12739
I1117 22:33:15.352155    3676 config.go:176] Loaded profile config "functional-20211117223105-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1117 22:33:15.353103    3676 driver.go:343] Setting default libvirt URI to qemu:///system
I1117 22:33:16.938081    3676 docker.go:132] docker version: linux-19.03.12
I1117 22:33:16.941082    3676 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1117 22:33:17.299819    3676 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:33:17.02211151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I1117 22:33:17.310098    3676 out.go:176] * Using the docker driver based on existing profile
I1117 22:33:17.310098    3676 start.go:280] selected driver: docker
I1117 22:33:17.310098    3676 start.go:775] validating driver "docker" against &{Name:functional-20211117223105-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117223105-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
I1117 22:33:17.310661    3676 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I1117 22:33:17.322221    3676 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1117 22:33:17.650902    3676 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:33:17.40195517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I1117 22:33:17.699844    3676 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1117 22:33:17.699844    3676 cni.go:93] Creating CNI manager for ""
I1117 22:33:17.699844    3676 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I1117 22:33:17.699844    3676 start_flags.go:282] config:
{Name:functional-20211117223105-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117223105-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
I1117 22:33:17.704189    3676 out.go:176] * Starting control plane node functional-20211117223105-9504 in cluster functional-20211117223105-9504
I1117 22:33:17.704189    3676 cache.go:118] Beginning downloading kic base image for docker with docker
I1117 22:33:17.712248    3676 out.go:176] * Pulling base image ...
I1117 22:33:17.712872    3676 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 22:33:17.712939    3676 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
I1117 22:33:17.713048    3676 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
I1117 22:33:17.713184    3676 cache.go:57] Caching tarball of preloaded images
I1117 22:33:17.713371    3676 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1117 22:33:17.713371    3676 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
I1117 22:33:17.713901    3676 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20211117223105-9504\config.json ...
I1117 22:33:17.805953    3676 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
I1117 22:33:17.805953    3676 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
I1117 22:33:17.805953    3676 cache.go:206] Successfully downloaded all kic artifacts
I1117 22:33:17.805953    3676 start.go:313] acquiring machines lock for functional-20211117223105-9504: {Name:mkf27125e9684350d3c166e137abc6f49434f9ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1117 22:33:17.805953    3676 start.go:317] acquired machines lock for "functional-20211117223105-9504" in 0s
I1117 22:33:17.805953    3676 start.go:93] Skipping create...Using existing machine configuration
I1117 22:33:17.805953    3676 fix.go:55] fixHost starting: 
I1117 22:33:17.811906    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:17.901111    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:17.901111    3676 fix.go:108] recreateIfNeeded on functional-20211117223105-9504: state= err=unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:17.901111    3676 fix.go:113] machineExists: false. err=machine does not exist
I1117 22:33:17.907114    3676 out.go:176] * docker "functional-20211117223105-9504" container is missing, will recreate.
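The decision above hinges on the exit status of `docker container inspect NAME --format={{.State.Status}}`: a non-zero exit (container gone) becomes `machineExists: false` and triggers the recreate path. A minimal sketch of that mapping, with a hypothetical `inspect_mock` standing in for docker and reproducing the "No such container" failure from the log (not minikube's actual source):

```shell
#!/bin/sh
# Sketch only: map the inspect exit status to a machine-exists decision.
# inspect_mock is a hypothetical stand-in for
#   docker container inspect "$1" --format={{.State.Status}}
# failing the way the log shows.
NAME=functional-20211117223105-9504
inspect_mock() { echo "Error: No such container: $1" >&2; return 1; }
if state=$(inspect_mock "$NAME" 2>/dev/null); then
  echo "machineExists: true, state=$state"
else
  echo "machineExists: false, will recreate"
fi
```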
I1117 22:33:17.907114    3676 delete.go:124] DEMOLISHING functional-20211117223105-9504 ...
I1117 22:33:17.914099    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:18.008073    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
W1117 22:33:18.008073    3676 stop.go:75] unable to get state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:18.008073    3676 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:18.015050    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:18.108865    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:18.108865    3676 delete.go:82] Unable to get host status for functional-20211117223105-9504, assuming it has already been deleted: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:18.111875    3676 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117223105-9504
W1117 22:33:18.211502    3676 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117223105-9504 returned with exit code 1
I1117 22:33:18.211502    3676 kic.go:360] could not find the container functional-20211117223105-9504 to remove it. will try anyways
I1117 22:33:18.215477    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:18.308688    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
W1117 22:33:18.308688    3676 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:18.311735    3676 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0"
W1117 22:33:18.399243    3676 cli_runner.go:162] docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0" returned with exit code 1
I1117 22:33:18.399243    3676 oci.go:658] error shutdown functional-20211117223105-9504: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:19.402929    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:19.488280    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:19.488280    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:19.488280    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:19.488280    3676 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:20.044614    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:20.141722    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:20.141722    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:20.141722    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:20.141722    3676 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:21.225285    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:21.316233    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:21.316233    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:21.316233    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:21.316233    3676 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:22.631050    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:22.719426    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:22.719426    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:22.719426    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:22.719426    3676 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:24.305804    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:24.399866    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:24.399866    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:24.399866    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:24.399866    3676 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:26.744350    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:26.834047    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:26.834047    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:26.834047    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:26.834047    3676 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:31.344468    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:31.438942    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:31.438942    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:31.438942    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:31.438942    3676 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:34.663787    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:34.758987    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:34.758987    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:34.758987    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:34.758987    3676 oci.go:87] couldn't shut down functional-20211117223105-9504 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:34.762930    3676 cli_runner.go:115] Run: docker rm -f -v functional-20211117223105-9504
W1117 22:33:34.849478    3676 cli_runner.go:162] docker rm -f -v functional-20211117223105-9504 returned with exit code 1
W1117 22:33:34.850437    3676 delete.go:139] delete failed (probably ok) <nil>
I1117 22:33:34.850437    3676 fix.go:120] Sleeping 1 second for extra luck!
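The string of "will retry after …" lines above comes from minikube's retry helper: each failed status check is repeated after a growing delay (552ms, 1.08s, 1.31s, … 4.5s) until the retry budget runs out, at which point the shutdown failure is treated as "might be okay" and deletion proceeds anyway. A rough sketch of that pattern (hypothetical, not retry.go itself; `check_exited` stands in for the inspect call):

```shell
#!/bin/sh
# Sketch of retry-with-growing-delay. The stand-in check succeeds once
# three retries have been attempted; the real code inspects the container.
attempt=0
delay=1
check_exited() { [ "$attempt" -ge 3 ]; }
until check_exited; do
  attempt=$((attempt + 1))
  echo "will retry after ${delay}s (attempt ${attempt})"
  # sleep "$delay"   # the real loop waits here before re-checking
  delay=$((delay * 2))
done
echo "verified exited after ${attempt} retries"
```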
I1117 22:33:35.851190    3676 start.go:126] createHost starting for "" (driver="docker")
I1117 22:33:35.855661    3676 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
I1117 22:33:35.856029    3676 start.go:160] libmachine.API.Create for "functional-20211117223105-9504" (driver="docker")
I1117 22:33:35.856029    3676 client.go:168] LocalClient.Create starting
I1117 22:33:35.856029    3676 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I1117 22:33:35.856029    3676 main.go:130] libmachine: Decoding PEM data...
I1117 22:33:35.856029    3676 main.go:130] libmachine: Parsing certificate...
I1117 22:33:35.857059    3676 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I1117 22:33:35.857059    3676 main.go:130] libmachine: Decoding PEM data...
I1117 22:33:35.857059    3676 main.go:130] libmachine: Parsing certificate...
I1117 22:33:35.863117    3676 cli_runner.go:115] Run: docker network inspect functional-20211117223105-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 22:33:35.948175    3676 network_create.go:67] Found existing network {name:functional-20211117223105-9504 subnet:0xc001246e40 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
I1117 22:33:35.948175    3676 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117223105-9504" container
I1117 22:33:35.955393    3676 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1117 22:33:36.047866    3676 cli_runner.go:115] Run: docker volume create functional-20211117223105-9504 --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --label created_by.minikube.sigs.k8s.io=true
I1117 22:33:36.135034    3676 oci.go:102] Successfully created a docker volume functional-20211117223105-9504
I1117 22:33:36.138032    3676 cli_runner.go:115] Run: docker run --rm --name functional-20211117223105-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --entrypoint /usr/bin/test -v functional-20211117223105-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1117 22:33:36.949034    3676 oci.go:106] Successfully prepared a docker volume functional-20211117223105-9504
I1117 22:33:36.949034    3676 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 22:33:36.949034    3676 kic.go:179] Starting extracting preloaded images to volume ...
I1117 22:33:36.953049    3676 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
I1117 22:33:36.953049    3676 cli_runner.go:115] Run: docker system info --format "{{json .}}"
W1117 22:33:37.064063    3676 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
I1117 22:33:37.064063    3676 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
stdout:

stderr:
docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
See 'docker run --help'.
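The extraction failure above boils down to one docker run: mount the preload tarball read-only, mount the named volume as the target, and untar with lz4 decompression inside the kicbase image; it exits 125 here because Docker Desktop's file-sharing prompt threw "The notification platform is unavailable", so the host path could never be mounted. A sketch that only assembles and prints that command (values copied from the log; nothing is executed):

```shell
#!/bin/sh
# Assemble the preload-extraction command line without running docker.
TARBALL='C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4'
VOLUME=functional-20211117223105-9504
KIC_IMAGE=gcr.io/k8s-minikube/kicbase:v0.0.28
set -- docker run --rm --entrypoint /usr/bin/tar \
  -v "${TARBALL}:/preloaded.tar:ro" \
  -v "${VOLUME}:/extractDir" \
  "${KIC_IMAGE}" \
  -I lz4 -xf /preloaded.tar -C /extractDir
# Print the command for inspection instead of executing it.
printf '%s\n' "$*"
```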
I1117 22:33:37.310097    3676 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 22:33:37.041173333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
E1117 22:33:37.310097    3676 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
I1117 22:33:37.310097    3676 client.go:171] LocalClient.Create took 1.4540573s
I1117 22:33:39.317046    3676 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 22:33:39.322572    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:39.414053    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:33:39.414053    3676 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:39.568371    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:39.660652    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:33:39.660652    3676 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:39.966704    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:40.051017    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:33:40.051300    3676 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:40.627804    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:40.712565    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
W1117 22:33:40.712565    3676 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504

W1117 22:33:40.712565    3676 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:40.712565    3676 start.go:129] duration metric: createHost completed in 4.8613391s
I1117 22:33:40.720233    3676 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 22:33:40.723599    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:40.811587    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:33:40.811736    3676 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:40.995258    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:41.082946    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:33:41.083049    3676 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:41.418919    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:41.532012    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:33:41.532153    3676 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:41.997213    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:42.090358    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
W1117 22:33:42.090569    3676 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504

W1117 22:33:42.090569    3676 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:42.090569    3676 fix.go:57] fixHost completed within 24.2844333s
I1117 22:33:42.090670    3676 start.go:80] releasing machines lock for "functional-20211117223105-9504", held for 24.284534s
W1117 22:33:42.090754    3676 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
W1117 22:33:42.090924    3676 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
I1117 22:33:42.091080    3676 start.go:547] Will try again in 5 seconds ...
I1117 22:33:47.091465    3676 start.go:313] acquiring machines lock for functional-20211117223105-9504: {Name:mkf27125e9684350d3c166e137abc6f49434f9ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1117 22:33:47.091465    3676 start.go:317] acquired machines lock for "functional-20211117223105-9504" in 0s
I1117 22:33:47.092035    3676 start.go:93] Skipping create...Using existing machine configuration
I1117 22:33:47.092035    3676 fix.go:55] fixHost starting: 
I1117 22:33:47.102292    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:47.190636    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:47.190636    3676 fix.go:108] recreateIfNeeded on functional-20211117223105-9504: state= err=unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:47.190636    3676 fix.go:113] machineExists: false. err=machine does not exist
I1117 22:33:47.195790    3676 out.go:176] * docker "functional-20211117223105-9504" container is missing, will recreate.
I1117 22:33:47.195790    3676 delete.go:124] DEMOLISHING functional-20211117223105-9504 ...
I1117 22:33:47.202758    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:47.287509    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
W1117 22:33:47.287509    3676 stop.go:75] unable to get state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:47.287656    3676 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:47.296563    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:47.386844    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:47.387057    3676 delete.go:82] Unable to get host status for functional-20211117223105-9504, assuming it has already been deleted: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:47.392656    3676 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117223105-9504
W1117 22:33:47.487253    3676 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117223105-9504 returned with exit code 1
I1117 22:33:47.487431    3676 kic.go:360] could not find the container functional-20211117223105-9504 to remove it. will try anyways
I1117 22:33:47.491938    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:47.579115    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
W1117 22:33:47.579115    3676 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:47.584555    3676 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0"
W1117 22:33:47.672448    3676 cli_runner.go:162] docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0" returned with exit code 1
I1117 22:33:47.672448    3676 oci.go:658] error shutdown functional-20211117223105-9504: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:48.678756    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:48.766091    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:48.766091    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:48.766091    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:48.766091    3676 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:49.163585    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:49.251835    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:49.252036    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:49.252036    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:49.252130    3676 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:49.851453    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:49.938861    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:49.938861    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:49.938861    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:49.938861    3676 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:51.270447    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:51.355770    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:51.356068    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:51.356068    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:51.356155    3676 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:52.574833    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:52.662154    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:52.662370    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:52.662370    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:52.662370    3676 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:54.447942    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:54.540490    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:54.540644    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:54.540644    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:54.540644    3676 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:57.814604    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:57.899583    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:57.899583    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:57.899583    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:57.899583    3676 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:04.003853    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:34:04.106653    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:34:04.106895    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:04.106895    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:34:04.106895    3676 oci.go:87] couldn't shut down functional-20211117223105-9504 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504

I1117 22:34:04.112987    3676 cli_runner.go:115] Run: docker rm -f -v functional-20211117223105-9504
W1117 22:34:04.199073    3676 cli_runner.go:162] docker rm -f -v functional-20211117223105-9504 returned with exit code 1
W1117 22:34:04.200913    3676 delete.go:139] delete failed (probably ok) <nil>
I1117 22:34:04.200913    3676 fix.go:120] Sleeping 1 second for extra luck!
I1117 22:34:05.201940    3676 start.go:126] createHost starting for "" (driver="docker")
I1117 22:34:05.205306    3676 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
I1117 22:34:05.205615    3676 start.go:160] libmachine.API.Create for "functional-20211117223105-9504" (driver="docker")
I1117 22:34:05.205615    3676 client.go:168] LocalClient.Create starting
I1117 22:34:05.206156    3676 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I1117 22:34:05.206320    3676 main.go:130] libmachine: Decoding PEM data...
I1117 22:34:05.206320    3676 main.go:130] libmachine: Parsing certificate...
I1117 22:34:05.206465    3676 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I1117 22:34:05.206636    3676 main.go:130] libmachine: Decoding PEM data...
I1117 22:34:05.206705    3676 main.go:130] libmachine: Parsing certificate...
I1117 22:34:05.210982    3676 cli_runner.go:115] Run: docker network inspect functional-20211117223105-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 22:34:05.299269    3676 network_create.go:67] Found existing network {name:functional-20211117223105-9504 subnet:0xc000ed08d0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
I1117 22:34:05.299269    3676 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117223105-9504" container
I1117 22:34:05.307935    3676 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1117 22:34:05.397335    3676 cli_runner.go:115] Run: docker volume create functional-20211117223105-9504 --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --label created_by.minikube.sigs.k8s.io=true
I1117 22:34:05.482339    3676 oci.go:102] Successfully created a docker volume functional-20211117223105-9504
I1117 22:34:05.487786    3676 cli_runner.go:115] Run: docker run --rm --name functional-20211117223105-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --entrypoint /usr/bin/test -v functional-20211117223105-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1117 22:34:06.325344    3676 oci.go:106] Successfully prepared a docker volume functional-20211117223105-9504
I1117 22:34:06.325344    3676 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 22:34:06.325344    3676 kic.go:179] Starting extracting preloaded images to volume ...
I1117 22:34:06.330172    3676 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1117 22:34:06.330172    3676 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
W1117 22:34:06.437568    3676 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
I1117 22:34:06.437854    3676 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
stdout:

stderr:
docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
See 'docker run --help'.
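Editor's note: the `exit status 125` above follows Docker's documented `docker run` exit-status convention, under which 125 means the daemon or CLI itself failed before the container command ever ran — consistent with the file-sharing exception in the stderr above. A minimal sketch of that convention (the helper name is illustrative, not part of minikube):

```python
def classify_docker_exit(code: int) -> str:
    """Map a `docker run` exit status to its documented meaning.

    Per Docker's CLI convention:
      125 -> the docker daemon/CLI failed (as in the log above),
      126 -> the contained command could not be invoked,
      127 -> the contained command could not be found,
      anything else -> the exit code of the contained command itself.
    """
    meanings = {
        125: "docker daemon or CLI error",
        126: "container command not invocable",
        127: "container command not found",
    }
    return meanings.get(code, f"container command exited with {code}")

print(classify_docker_exit(125))  # the tar-extraction failure above
```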
I1117 22:34:06.692991    3676 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 22:34:06.421261579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
E1117 22:34:06.693318    3676 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
I1117 22:34:06.693392    3676 client.go:171] LocalClient.Create took 1.4877655s
I1117 22:34:08.701656    3676 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 22:34:08.705034    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:08.794357    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:34:08.794426    3676 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
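Editor's note: the Go template in the surrounding `docker container inspect` commands, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, digs the published SSH host port out of the inspect JSON. The same lookup in Python, against an illustrative inspect document (the sample values are not from this run):

```python
import json

# Shape of `docker container inspect` output that the template reads;
# this sample document is illustrative, not taken from this test run.
sample = json.loads("""
[{"NetworkSettings": {"Ports": {"22/tcp": [{"HostIp": "127.0.0.1",
                                            "HostPort": "55082"}]}}}]
""")

def ssh_host_port(inspect_doc):
    """Mirror the template: .NetworkSettings.Ports["22/tcp"][0].HostPort."""
    return inspect_doc[0]["NetworkSettings"]["Ports"]["22/tcp"][0]["HostPort"]

print(ssh_host_port(sample))  # "55082"
```

When the container does not exist, `inspect` exits 1 with "No such container" and there is no JSON to index, which is exactly the failure the retries above keep hitting.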
I1117 22:34:08.998173    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:09.085657    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:34:09.085930    3676 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:09.389217    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:09.476874    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:34:09.476874    3676 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:10.186135    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:10.274749    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
W1117 22:34:10.274827    3676 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504

W1117 22:34:10.274916    3676 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:10.274916    3676 start.go:129] duration metric: createHost completed in 5.0729382s
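Editor's note: the `retry.go:31` lines above show minikube re-probing the missing container with growing waits (198ms, 298ms, 704ms, ...). A rough sketch of that retry-with-backoff pattern, assuming jittered exponential growth (the exact policy inside minikube's retry package is not visible in this log):

```python
import random
import time

def retry_with_backoff(fn, attempts=4, base=0.001, factor=1.5):
    """Call fn until it succeeds, sleeping a jittered, growing delay
    between failures; re-raise the last error when attempts run out."""
    delay = base
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay * (1 + random.random()))  # jittered wait
            delay *= factor

# Usage: a probe that fails twice (like the missing container above),
# then succeeds on the third call.
calls = []
def probe():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("No such container")
    return "22/tcp -> host port found"

print(retry_with_backoff(probe))
```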
I1117 22:34:10.281508    3676 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 22:34:10.285148    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:10.368828    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:34:10.368828    3676 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:10.713471    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:10.805324    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:34:10.805715    3676 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:11.259396    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:11.348588    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:34:11.348693    3676 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:11.930497    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:12.020895    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
W1117 22:34:12.021153    3676 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504

W1117 22:34:12.021153    3676 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:12.021153    3676 fix.go:57] fixHost completed within 24.9289315s
I1117 22:34:12.021244    3676 start.go:80] releasing machines lock for "functional-20211117223105-9504", held for 24.9295018s
W1117 22:34:12.021707    3676 out.go:241] * Failed to start docker container. Running "minikube delete -p functional-20211117223105-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
I1117 22:34:12.025975    3676 out.go:176] 
W1117 22:34:12.025975    3676 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
W1117 22:34:12.025975    3676 out.go:241] * 
W1117 22:34:12.027458    3676 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯

* 
***
--- FAIL: TestFunctional/serial/LogsCmd (2.22s)

TestFunctional/serial/LogsFileCmd (1.93s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1190: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\functional-20211117223105-950455227277\logs.txt
functional_test.go:1190: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\functional-20211117223105-950455227277\logs.txt: exit status 80 (1.8354784s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_42.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1192: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\functional-20211117223105-950455227277\logs.txt failed: exit status 80
functional_test.go:1195: expected empty minikube logs output, but got: 
***
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_logs_80bd2298da0c083373823443180fffe8ad701919_42.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr *****
functional_test.go:1165: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|----------------------------------------------------------|-------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
| Command |                           Args                           |               Profile               |       User        | Version |          Start Time           |           End Time            |
|---------|----------------------------------------------------------|-------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
| delete  | --all                                                    | download-only-20211117222633-9504   | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:27:15 GMT | Wed, 17 Nov 2021 22:27:18 GMT |
| delete  | -p                                                       | download-only-20211117222633-9504   | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:27:18 GMT | Wed, 17 Nov 2021 22:27:20 GMT |
|         | download-only-20211117222633-9504                        |                                     |                   |         |                               |                               |
| delete  | -p                                                       | download-only-20211117222633-9504   | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:27:20 GMT | Wed, 17 Nov 2021 22:27:22 GMT |
|         | download-only-20211117222633-9504                        |                                     |                   |         |                               |                               |
| delete  | -p                                                       | download-docker-20211117222722-9504 | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:27:43 GMT | Wed, 17 Nov 2021 22:27:46 GMT |
|         | download-docker-20211117222722-9504                      |                                     |                   |         |                               |                               |
| delete  | -p addons-20211117222746-9504                            | addons-20211117222746-9504          | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:29:11 GMT | Wed, 17 Nov 2021 22:29:14 GMT |
| delete  | -p nospam-20211117222914-9504                            | nospam-20211117222914-9504          | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:31:03 GMT | Wed, 17 Nov 2021 22:31:05 GMT |
| -p      | functional-20211117223105-9504 cache add                 | functional-20211117223105-9504      | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:32:54 GMT | Wed, 17 Nov 2021 22:32:56 GMT |
|         | minikube-local-cache-test:functional-20211117223105-9504 |                                     |                   |         |                               |                               |
| -p      | functional-20211117223105-9504 cache delete              | functional-20211117223105-9504      | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:32:56 GMT | Wed, 17 Nov 2021 22:32:56 GMT |
|         | minikube-local-cache-test:functional-20211117223105-9504 |                                     |                   |         |                               |                               |
| cache   | list                                                     | minikube                            | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:32:57 GMT | Wed, 17 Nov 2021 22:32:57 GMT |
| -p      | functional-20211117223105-9504                           | functional-20211117223105-9504      | minikube2\jenkins | v1.24.0 | Wed, 17 Nov 2021 22:33:02 GMT | Wed, 17 Nov 2021 22:33:02 GMT |
|         | cache reload                                             |                                     |                   |         |                               |                               |
|---------|----------------------------------------------------------|-------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
* 
* ==> Last Start <==
* Log file created at: 2021/11/17 22:33:15
Running on machine: minikube2
Binary: Built with gc go1.17.3 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
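Editor's note: every `I1117 22:33:15.247898 ...` line below follows the glog/klog header layout stated just above. A small, hedged parser for that format (the regex is a reconstruction from the stated layout, not code from minikube):

```python
import re

# [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_LINE = re.compile(
    r"^(?P<level>[IWEF])(?P<mmdd>\d{4}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<threadid>\d+) (?P<file>[^:]+):(?P<line>\d+)\] (?P<msg>.*)$"
)

def parse_klog(line):
    """Return the header fields of one glog/klog line, or None."""
    m = KLOG_LINE.match(line)
    return m.groupdict() if m else None

rec = parse_klog(
    "I1117 22:33:15.247898    3676 out.go:297] Setting OutFile to fd 1004 ..."
)
print(rec["level"], rec["file"], rec["msg"])
```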
I1117 22:33:15.247898    3676 out.go:297] Setting OutFile to fd 1004 ...
I1117 22:33:15.325035    3676 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 22:33:15.325035    3676 out.go:310] Setting ErrFile to fd 1008...
I1117 22:33:15.325035    3676 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 22:33:15.336661    3676 out.go:304] Setting JSON to false
I1117 22:33:15.338820    3676 start.go:112] hostinfo: {"hostname":"minikube2","uptime":77711,"bootTime":1637110684,"procs":127,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
W1117 22:33:15.338820    3676 start.go:120] gopshost.Virtualization returned error: not implemented yet
I1117 22:33:15.344752    3676 out.go:176] * [functional-20211117223105-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
I1117 22:33:15.344914    3676 notify.go:174] Checking for updates...
I1117 22:33:15.347868    3676 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
I1117 22:33:15.350107    3676 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
I1117 22:33:15.352155    3676 out.go:176]   - MINIKUBE_LOCATION=12739
I1117 22:33:15.352155    3676 config.go:176] Loaded profile config "functional-20211117223105-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1117 22:33:15.353103    3676 driver.go:343] Setting default libvirt URI to qemu:///system
I1117 22:33:16.938081    3676 docker.go:132] docker version: linux-19.03.12
I1117 22:33:16.941082    3676 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1117 22:33:17.299819    3676 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:33:17.02211151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I1117 22:33:17.310098    3676 out.go:176] * Using the docker driver based on existing profile
I1117 22:33:17.310098    3676 start.go:280] selected driver: docker
I1117 22:33:17.310098    3676 start.go:775] validating driver "docker" against &{Name:functional-20211117223105-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117223105-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
I1117 22:33:17.310661    3676 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I1117 22:33:17.322221    3676 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1117 22:33:17.650902    3676 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:33:17.40195517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.
docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I1117 22:33:17.699844    3676 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1117 22:33:17.699844    3676 cni.go:93] Creating CNI manager for ""
I1117 22:33:17.699844    3676 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I1117 22:33:17.699844    3676 start_flags.go:282] config:
{Name:functional-20211117223105-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117223105-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
I1117 22:33:17.704189    3676 out.go:176] * Starting control plane node functional-20211117223105-9504 in cluster functional-20211117223105-9504
I1117 22:33:17.704189    3676 cache.go:118] Beginning downloading kic base image for docker with docker
I1117 22:33:17.712248    3676 out.go:176] * Pulling base image ...
I1117 22:33:17.712872    3676 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 22:33:17.712939    3676 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
I1117 22:33:17.713048    3676 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
I1117 22:33:17.713184    3676 cache.go:57] Caching tarball of preloaded images
I1117 22:33:17.713371    3676 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1117 22:33:17.713371    3676 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
I1117 22:33:17.713901    3676 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-20211117223105-9504\config.json ...
I1117 22:33:17.805953    3676 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
I1117 22:33:17.805953    3676 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
I1117 22:33:17.805953    3676 cache.go:206] Successfully downloaded all kic artifacts
I1117 22:33:17.805953    3676 start.go:313] acquiring machines lock for functional-20211117223105-9504: {Name:mkf27125e9684350d3c166e137abc6f49434f9ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1117 22:33:17.805953    3676 start.go:317] acquired machines lock for "functional-20211117223105-9504" in 0s
I1117 22:33:17.805953    3676 start.go:93] Skipping create...Using existing machine configuration
I1117 22:33:17.805953    3676 fix.go:55] fixHost starting: 
I1117 22:33:17.811906    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:17.901111    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:17.901111    3676 fix.go:108] recreateIfNeeded on functional-20211117223105-9504: state= err=unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:17.901111    3676 fix.go:113] machineExists: false. err=machine does not exist
I1117 22:33:17.907114    3676 out.go:176] * docker "functional-20211117223105-9504" container is missing, will recreate.
I1117 22:33:17.907114    3676 delete.go:124] DEMOLISHING functional-20211117223105-9504 ...
I1117 22:33:17.914099    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:18.008073    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
W1117 22:33:18.008073    3676 stop.go:75] unable to get state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:18.008073    3676 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:18.015050    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:18.108865    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:18.108865    3676 delete.go:82] Unable to get host status for functional-20211117223105-9504, assuming it has already been deleted: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:18.111875    3676 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117223105-9504
W1117 22:33:18.211502    3676 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117223105-9504 returned with exit code 1
I1117 22:33:18.211502    3676 kic.go:360] could not find the container functional-20211117223105-9504 to remove it. will try anyways
I1117 22:33:18.215477    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:18.308688    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
W1117 22:33:18.308688    3676 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:18.311735    3676 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0"
W1117 22:33:18.399243    3676 cli_runner.go:162] docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0" returned with exit code 1
I1117 22:33:18.399243    3676 oci.go:658] error shutdown functional-20211117223105-9504: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:19.402929    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:19.488280    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:19.488280    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:19.488280    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:19.488280    3676 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:20.044614    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:20.141722    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:20.141722    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:20.141722    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:20.141722    3676 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:21.225285    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:21.316233    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:21.316233    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:21.316233    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:21.316233    3676 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:22.631050    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:22.719426    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:22.719426    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:22.719426    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:22.719426    3676 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:24.305804    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:24.399866    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:24.399866    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:24.399866    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:24.399866    3676 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:26.744350    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:26.834047    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:26.834047    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:26.834047    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:26.834047    3676 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:31.344468    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:31.438942    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:31.438942    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:31.438942    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:31.438942    3676 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:34.663787    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:34.758987    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:34.758987    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:34.758987    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:34.758987    3676 oci.go:87] couldn't shut down functional-20211117223105-9504 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:34.762930    3676 cli_runner.go:115] Run: docker rm -f -v functional-20211117223105-9504
W1117 22:33:34.849478    3676 cli_runner.go:162] docker rm -f -v functional-20211117223105-9504 returned with exit code 1
W1117 22:33:34.850437    3676 delete.go:139] delete failed (probably ok) <nil>
I1117 22:33:34.850437    3676 fix.go:120] Sleeping 1 second for extra luck!
I1117 22:33:35.851190    3676 start.go:126] createHost starting for "" (driver="docker")
I1117 22:33:35.855661    3676 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
I1117 22:33:35.856029    3676 start.go:160] libmachine.API.Create for "functional-20211117223105-9504" (driver="docker")
I1117 22:33:35.856029    3676 client.go:168] LocalClient.Create starting
I1117 22:33:35.856029    3676 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I1117 22:33:35.856029    3676 main.go:130] libmachine: Decoding PEM data...
I1117 22:33:35.856029    3676 main.go:130] libmachine: Parsing certificate...
I1117 22:33:35.857059    3676 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I1117 22:33:35.857059    3676 main.go:130] libmachine: Decoding PEM data...
I1117 22:33:35.857059    3676 main.go:130] libmachine: Parsing certificate...
I1117 22:33:35.863117    3676 cli_runner.go:115] Run: docker network inspect functional-20211117223105-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 22:33:35.948175    3676 network_create.go:67] Found existing network {name:functional-20211117223105-9504 subnet:0xc001246e40 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
I1117 22:33:35.948175    3676 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117223105-9504" container
I1117 22:33:35.955393    3676 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1117 22:33:36.047866    3676 cli_runner.go:115] Run: docker volume create functional-20211117223105-9504 --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --label created_by.minikube.sigs.k8s.io=true
I1117 22:33:36.135034    3676 oci.go:102] Successfully created a docker volume functional-20211117223105-9504
I1117 22:33:36.138032    3676 cli_runner.go:115] Run: docker run --rm --name functional-20211117223105-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --entrypoint /usr/bin/test -v functional-20211117223105-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1117 22:33:36.949034    3676 oci.go:106] Successfully prepared a docker volume functional-20211117223105-9504
I1117 22:33:36.949034    3676 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 22:33:36.949034    3676 kic.go:179] Starting extracting preloaded images to volume ...
I1117 22:33:36.953049    3676 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
I1117 22:33:36.953049    3676 cli_runner.go:115] Run: docker system info --format "{{json .}}"
W1117 22:33:37.064063    3676 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
I1117 22:33:37.064063    3676 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
stdout:

stderr:
docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location wher
e exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
See 'docker run --help'.
I1117 22:33:37.310097    3676 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 22:33:37.041173333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
E1117 22:33:37.310097    3676 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
I1117 22:33:37.310097    3676 client.go:171] LocalClient.Create took 1.4540573s
I1117 22:33:39.317046    3676 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 22:33:39.322572    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:39.414053    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:33:39.414053    3676 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:39.568371    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:39.660652    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:33:39.660652    3676 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:39.966704    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:40.051017    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:33:40.051300    3676 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:40.627804    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:40.712565    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
W1117 22:33:40.712565    3676 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504

W1117 22:33:40.712565    3676 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:40.712565    3676 start.go:129] duration metric: createHost completed in 4.8613391s
I1117 22:33:40.720233    3676 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 22:33:40.723599    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:40.811587    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:33:40.811736    3676 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:40.995258    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:41.082946    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:33:41.083049    3676 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:41.418919    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:41.532012    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:33:41.532153    3676 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:41.997213    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:33:42.090358    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
W1117 22:33:42.090569    3676 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504

W1117 22:33:42.090569    3676 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:42.090569    3676 fix.go:57] fixHost completed within 24.2844333s
I1117 22:33:42.090670    3676 start.go:80] releasing machines lock for "functional-20211117223105-9504", held for 24.284534s
W1117 22:33:42.090754    3676 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
W1117 22:33:42.090924    3676 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
I1117 22:33:42.091080    3676 start.go:547] Will try again in 5 seconds ...
I1117 22:33:47.091465    3676 start.go:313] acquiring machines lock for functional-20211117223105-9504: {Name:mkf27125e9684350d3c166e137abc6f49434f9ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1117 22:33:47.091465    3676 start.go:317] acquired machines lock for "functional-20211117223105-9504" in 0s
I1117 22:33:47.092035    3676 start.go:93] Skipping create...Using existing machine configuration
I1117 22:33:47.092035    3676 fix.go:55] fixHost starting: 
I1117 22:33:47.102292    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:47.190636    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:47.190636    3676 fix.go:108] recreateIfNeeded on functional-20211117223105-9504: state= err=unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:47.190636    3676 fix.go:113] machineExists: false. err=machine does not exist
I1117 22:33:47.195790    3676 out.go:176] * docker "functional-20211117223105-9504" container is missing, will recreate.
I1117 22:33:47.195790    3676 delete.go:124] DEMOLISHING functional-20211117223105-9504 ...
I1117 22:33:47.202758    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:47.287509    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
W1117 22:33:47.287509    3676 stop.go:75] unable to get state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:47.287656    3676 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:47.296563    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:47.386844    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:47.387057    3676 delete.go:82] Unable to get host status for functional-20211117223105-9504, assuming it has already been deleted: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:47.392656    3676 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117223105-9504
W1117 22:33:47.487253    3676 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117223105-9504 returned with exit code 1
I1117 22:33:47.487431    3676 kic.go:360] could not find the container functional-20211117223105-9504 to remove it. will try anyways
I1117 22:33:47.491938    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:47.579115    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
W1117 22:33:47.579115    3676 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:47.584555    3676 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0"
W1117 22:33:47.672448    3676 cli_runner.go:162] docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0" returned with exit code 1
I1117 22:33:47.672448    3676 oci.go:658] error shutdown functional-20211117223105-9504: docker exec --privileged -t functional-20211117223105-9504 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:48.678756    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:48.766091    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:48.766091    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:48.766091    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:48.766091    3676 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:49.163585    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:49.251835    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:49.252036    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:49.252036    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:49.252130    3676 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:49.851453    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:49.938861    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:49.938861    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:49.938861    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:49.938861    3676 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:51.270447    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:51.355770    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:51.356068    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:51.356068    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:51.356155    3676 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:52.574833    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:52.662154    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:52.662370    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:52.662370    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:52.662370    3676 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:54.447942    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:54.540490    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:54.540644    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:54.540644    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:54.540644    3676 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:57.814604    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:33:57.899583    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:33:57.899583    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:33:57.899583    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:33:57.899583    3676 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:04.003853    3676 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
W1117 22:34:04.106653    3676 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
I1117 22:34:04.106895    3676 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:04.106895    3676 oci.go:672] temporary error: container functional-20211117223105-9504 status is  but expect it to be exited
I1117 22:34:04.106895    3676 oci.go:87] couldn't shut down functional-20211117223105-9504 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504

I1117 22:34:04.112987    3676 cli_runner.go:115] Run: docker rm -f -v functional-20211117223105-9504
W1117 22:34:04.199073    3676 cli_runner.go:162] docker rm -f -v functional-20211117223105-9504 returned with exit code 1
W1117 22:34:04.200913    3676 delete.go:139] delete failed (probably ok) <nil>
I1117 22:34:04.200913    3676 fix.go:120] Sleeping 1 second for extra luck!
I1117 22:34:05.201940    3676 start.go:126] createHost starting for "" (driver="docker")
I1117 22:34:05.205306    3676 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
I1117 22:34:05.205615    3676 start.go:160] libmachine.API.Create for "functional-20211117223105-9504" (driver="docker")
I1117 22:34:05.205615    3676 client.go:168] LocalClient.Create starting
I1117 22:34:05.206156    3676 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
I1117 22:34:05.206320    3676 main.go:130] libmachine: Decoding PEM data...
I1117 22:34:05.206320    3676 main.go:130] libmachine: Parsing certificate...
I1117 22:34:05.206465    3676 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
I1117 22:34:05.206636    3676 main.go:130] libmachine: Decoding PEM data...
I1117 22:34:05.206705    3676 main.go:130] libmachine: Parsing certificate...
I1117 22:34:05.210982    3676 cli_runner.go:115] Run: docker network inspect functional-20211117223105-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 22:34:05.299269    3676 network_create.go:67] Found existing network {name:functional-20211117223105-9504 subnet:0xc000ed08d0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
I1117 22:34:05.299269    3676 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117223105-9504" container
I1117 22:34:05.307935    3676 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1117 22:34:05.397335    3676 cli_runner.go:115] Run: docker volume create functional-20211117223105-9504 --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --label created_by.minikube.sigs.k8s.io=true
I1117 22:34:05.482339    3676 oci.go:102] Successfully created a docker volume functional-20211117223105-9504
I1117 22:34:05.487786    3676 cli_runner.go:115] Run: docker run --rm --name functional-20211117223105-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117223105-9504 --entrypoint /usr/bin/test -v functional-20211117223105-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1117 22:34:06.325344    3676 oci.go:106] Successfully prepared a docker volume functional-20211117223105-9504
I1117 22:34:06.325344    3676 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 22:34:06.325344    3676 kic.go:179] Starting extracting preloaded images to volume ...
I1117 22:34:06.330172    3676 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1117 22:34:06.330172    3676 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
W1117 22:34:06.437568    3676 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
I1117 22:34:06.437854    3676 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117223105-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
stdout:

stderr:
docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location wher
e exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exception
DispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Exceptio
nServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
See 'docker run --help'.
I1117 22:34:06.692991    3676 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 22:34:06.421261579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index
.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
E1117 22:34:06.693318    3676 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
I1117 22:34:06.693392    3676 client.go:171] LocalClient.Create took 1.4877655s
I1117 22:34:08.701656    3676 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 22:34:08.705034    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:08.794357    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:34:08.794426    3676 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:08.998173    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:09.085657    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:34:09.085930    3676 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:09.389217    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:09.476874    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:34:09.476874    3676 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:10.186135    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:10.274749    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
W1117 22:34:10.274827    3676 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504

W1117 22:34:10.274916    3676 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:10.274916    3676 start.go:129] duration metric: createHost completed in 5.0729382s
I1117 22:34:10.281508    3676 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 22:34:10.285148    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:10.368828    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:34:10.368828    3676 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:10.713471    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:10.805324    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:34:10.805715    3676 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:11.259396    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:11.348588    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
I1117 22:34:11.348693    3676 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:11.930497    3676 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504
W1117 22:34:12.020895    3676 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504 returned with exit code 1
W1117 22:34:12.021153    3676 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504

W1117 22:34:12.021153    3676 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117223105-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117223105-9504: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117223105-9504
I1117 22:34:12.021153    3676 fix.go:57] fixHost completed within 24.9289315s
I1117 22:34:12.021244    3676 start.go:80] releasing machines lock for "functional-20211117223105-9504", held for 24.9295018s
W1117 22:34:12.021707    3676 out.go:241] * Failed to start docker container. Running "minikube delete -p functional-20211117223105-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
I1117 22:34:12.025975    3676 out.go:176] 
W1117 22:34:12.025975    3676 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
W1117 22:34:12.025975    3676 out.go:241] * 
W1117 22:34:12.027458    3676 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯

* 
***
--- FAIL: TestFunctional/serial/LogsFileCmd (1.93s)

TestFunctional/parallel/StatusCmd (7.36s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:796: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:796: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 status: exit status 7 (1.8071647s)

-- stdout --
	functional-20211117223105-9504
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E1117 22:34:34.018267   10276 status.go:258] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	E1117 22:34:34.018267   10276 status.go:261] The "functional-20211117223105-9504" host does not exist!

** /stderr **
functional_test.go:798: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 status" : exit status 7
functional_test.go:802: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:802: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (1.8243413s)

-- stdout --
	host:Nonexistent,kublet:Nonexistent,apiserver:Nonexistent,kubeconfig:Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:34:35.843709    4208 status.go:258] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	E1117 22:34:35.843709    4208 status.go:261] The "functional-20211117223105-9504" host does not exist!

** /stderr **
functional_test.go:804: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:814: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 status -o json

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:814: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 status -o json: exit status 7 (1.7892284s)

-- stdout --
	{"Name":"functional-20211117223105-9504","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	E1117 22:34:37.632583    6768 status.go:258] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	E1117 22:34:37.632689    6768 status.go:261] The "functional-20211117223105-9504" host does not exist!

** /stderr **
functional_test.go:816: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======

=== CONT  TestFunctional/parallel/StatusCmd
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504

=== CONT  TestFunctional/parallel/StatusCmd
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504

=== CONT  TestFunctional/parallel/StatusCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.8197526s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:34:39.566497    6564 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/StatusCmd (7.36s)

TestFunctional/parallel/ServiceCmd (3.05s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1372: (dbg) Run:  kubectl --context functional-20211117223105-9504 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1372: (dbg) Non-zero exit: kubectl --context functional-20211117223105-9504 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8: exit status 1 (271.7469ms)

** stderr ** 
	W1117 22:34:30.609994   11684 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20211117223105-9504" does not exist

** /stderr **
functional_test.go:1376: failed to create hello-node deployment with this command "kubectl --context functional-20211117223105-9504 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8": exit status 1.
functional_test.go:1341: service test failed - dumping debug information
functional_test.go:1342: -----------------------service failure post-mortem--------------------------------
functional_test.go:1345: (dbg) Run:  kubectl --context functional-20211117223105-9504 describe po hello-node
functional_test.go:1345: (dbg) Non-zero exit: kubectl --context functional-20211117223105-9504 describe po hello-node: exit status 1 (278.9036ms)

** stderr ** 
	W1117 22:34:30.894620   10736 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117223105-9504
	* cluster has no server defined

** /stderr **
functional_test.go:1347: "kubectl --context functional-20211117223105-9504 describe po hello-node" failed: exit status 1
functional_test.go:1349: hello-node pod describe:
functional_test.go:1351: (dbg) Run:  kubectl --context functional-20211117223105-9504 logs -l app=hello-node
functional_test.go:1351: (dbg) Non-zero exit: kubectl --context functional-20211117223105-9504 logs -l app=hello-node: exit status 1 (272.2091ms)

** stderr ** 
	W1117 22:34:31.168095    3832 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117223105-9504
	* cluster has no server defined

** /stderr **
functional_test.go:1353: "kubectl --context functional-20211117223105-9504 logs -l app=hello-node" failed: exit status 1
functional_test.go:1355: hello-node logs:
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20211117223105-9504 describe svc hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1357: (dbg) Non-zero exit: kubectl --context functional-20211117223105-9504 describe svc hello-node: exit status 1 (295.9796ms)

** stderr ** 
	W1117 22:34:31.454530   11268 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117223105-9504
	* cluster has no server defined

** /stderr **
functional_test.go:1359: "kubectl --context functional-20211117223105-9504 describe svc hello-node" failed: exit status 1
functional_test.go:1361: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.8102202s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:34:33.450938    5420 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/ServiceCmd (3.05s)

TestFunctional/parallel/PersistentVolumeClaim (1.99s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:46: failed waiting for storage-provisioner: client config: context "functional-20211117223105-9504" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.8664661s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:34:31.737714    9864 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (1.99s)

TestFunctional/parallel/SSHCmd (5.68s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1517: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1517: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "echo hello": exit status 80 (1.8576858s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_19232f4b01a263c7fe4da55009757983856b4b95_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1522: failed to run an ssh command. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh \"echo hello\"" : exit status 80
functional_test.go:1526: expected minikube ssh command output to be -"hello"- but got *"\n\n"*. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh \"echo hello\""
functional_test.go:1534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "cat /etc/hostname"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "cat /etc/hostname": exit status 80 (1.8730461s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_5f4fcbb456675d30b61ad2920d0002a45adaee9e_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1540: failed to run an ssh command. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh \"cat /etc/hostname\"" : exit status 80
functional_test.go:1544: expected minikube ssh command output to be -"functional-20211117223105-9504"- but got *"\n\n"*. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/SSHCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504

=== CONT  TestFunctional/parallel/SSHCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.8328763s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:34:29.636501    1468 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/SSHCmd (5.68s)

TestFunctional/parallel/CpCmd (3.76s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cp testdata\cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cp testdata\cp-test.txt /home/docker/cp-test.txt: exit status 80 (1.9157158s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_7d41a958b2f4d4f711b7a60d0e0341faef40f8ed_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
helpers_test.go:539: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cp testdata\\cp-test.txt /home/docker/cp-test.txt" : exit status 80
helpers_test.go:548: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:548: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /home/docker/cp-test.txt": exit status 80 (1.8485087s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f9fbdc48f4e6e25fa352a85c2bd7e3c2c13fee65_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
helpers_test.go:553: failed to run an cp command. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh \"sudo cat /home/docker/cp-test.txt\"" : exit status 80
helpers_test.go:562: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"\n\n",
)
--- FAIL: TestFunctional/parallel/CpCmd (3.76s)

TestFunctional/parallel/MySQL (2.33s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1571: (dbg) Run:  kubectl --context functional-20211117223105-9504 replace --force -f testdata\mysql.yaml
functional_test.go:1571: (dbg) Non-zero exit: kubectl --context functional-20211117223105-9504 replace --force -f testdata\mysql.yaml: exit status 1 (281.4683ms)

** stderr ** 
	W1117 22:34:33.666586   12072 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	error: context "functional-20211117223105-9504" does not exist

** /stderr **
functional_test.go:1573: failed to kubectl replace mysql: args "kubectl --context functional-20211117223105-9504 replace --force -f testdata\\mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.945717s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:34:35.771541    5208 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/MySQL (2.33s)

TestFunctional/parallel/FileSync (3.92s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1707: Checking for existence of /etc/test/nested/copy/9504/hosts within VM
functional_test.go:1709: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /etc/test/nested/copy/9504/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1709: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /etc/test/nested/copy/9504/hosts": exit status 80 (1.9685572s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_a7743c0c96e97dcf014f7ac0c46af24db7079011_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1711: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /etc/test/nested/copy/9504/hosts" failed: exit status 80
functional_test.go:1714: file sync test content: 

functional_test.go:1724: /etc/sync.test content mismatch (-want +got):
string(
- 	"Test file for checking file sync process",
+ 	"\n\n",
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/FileSync]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.8547882s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 22:34:37.209920   11188 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/FileSync (3.92s)

                                                
                                    
TestFunctional/parallel/CertSync (13.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1750: Checking for existence of /etc/ssl/certs/9504.pem within VM
functional_test.go:1751: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /etc/ssl/certs/9504.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1751: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /etc/ssl/certs/9504.pem": exit status 80 (1.9110312s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_7d41a958b2f4d4f711b7a60d0e0341faef40f8ed_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1753: failed to check existence of "/etc/ssl/certs/9504.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh \"sudo cat /etc/ssl/certs/9504.pem\"": exit status 80
functional_test.go:1759: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/9504.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1750: Checking for existence of /usr/share/ca-certificates/9504.pem within VM
functional_test.go:1751: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /usr/share/ca-certificates/9504.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1751: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /usr/share/ca-certificates/9504.pem": exit status 80 (1.8548743s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f9fbdc48f4e6e25fa352a85c2bd7e3c2c13fee65_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1753: failed to check existence of "/usr/share/ca-certificates/9504.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh \"sudo cat /usr/share/ca-certificates/9504.pem\"": exit status 80
functional_test.go:1759: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/9504.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1750: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1751: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1751: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 80 (1.8496187s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_19232f4b01a263c7fe4da55009757983856b4b95_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1753: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 80
functional_test.go:1759: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1777: Checking for existence of /etc/ssl/certs/95042.pem within VM
functional_test.go:1778: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /etc/ssl/certs/95042.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1778: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /etc/ssl/certs/95042.pem": exit status 80 (1.8557575s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_5f4fcbb456675d30b61ad2920d0002a45adaee9e_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1780: failed to check existence of "/etc/ssl/certs/95042.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh \"sudo cat /etc/ssl/certs/95042.pem\"": exit status 80
functional_test.go:1786: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/95042.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
functional_test.go:1777: Checking for existence of /usr/share/ca-certificates/95042.pem within VM
functional_test.go:1778: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /usr/share/ca-certificates/95042.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1778: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /usr/share/ca-certificates/95042.pem": exit status 80 (1.8542467s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_ce95e183e73fd73de60ab3838891bdc87d01464d_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1780: failed to check existence of "/usr/share/ca-certificates/95042.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh \"sudo cat /usr/share/ca-certificates/95042.pem\"": exit status 80
functional_test.go:1786: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/95042.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
functional_test.go:1777: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1778: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1778: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 80 (1.8646415s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_15a8ec4b54c4600ccdf64f589dd9f75cfe823832_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1780: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 80
functional_test.go:1786: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/CertSync]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.8100875s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 22:34:33.294062    8520 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/CertSync (13.11s)

                                                
                                    
TestFunctional/parallel/NodeLabels (2.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:213: (dbg) Run:  kubectl --context functional-20211117223105-9504 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:213: (dbg) Non-zero exit: kubectl --context functional-20211117223105-9504 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (281.5057ms)

                                                
                                                
** stderr ** 
	W1117 22:34:37.445856    9780 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117223105-9504
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:215: failed to 'kubectl get nodes' with args "kubectl --context functional-20211117223105-9504 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:221: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	W1117 22:34:37.445856    9780 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117223105-9504
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:221: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	W1117 22:34:37.445856    9780 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117223105-9504
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:221: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	W1117 22:34:37.445856    9780 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117223105-9504
	* cluster has no server defined

                                                
                                                
** /stderr **
functional_test.go:221: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	W1117 22:34:37.445856    9780 loader.go:223] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117223105-9504
	* cluster has no server defined

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117223105-9504

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
helpers_test.go:235: (dbg) docker inspect functional-20211117223105-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117223105-9504",
	        "Id": "72218bd966d51a7f89406ef733cbb0e5b7382c2eca4e22dcf492d153f6d7a483",
	        "Created": "2021-11-17T22:31:08.713523089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20211117223105-9504 -n functional-20211117223105-9504: exit status 7 (1.8260936s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 22:34:39.449828   11228 status.go:247] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117223105-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/NodeLabels (2.24s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1805: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1805: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh "sudo systemctl is-active crio": exit status 80 (1.8724355s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_4c116c6236290140afdbb5dcaafee8e0c3250b76_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1808: output of 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_4c116c6236290140afdbb5dcaafee8e0c3250b76_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **: exit status 80
functional_test.go:1811: For runtime "docker": expected "crio" to be inactive but got "\n\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (1.87s)
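The check at functional_test.go:1811 expects the non-active runtime (crio, when the cluster runs docker) to report `inactive` from `sudo systemctl is-active crio`; here the SSH command itself failed with exit status 80, so the captured output was only blank lines. A hedged sketch of that expectation, as a hypothetical helper rather than the test's Go code:

```python
# Hedged sketch of the functional_test.go:1811 expectation: for runtime
# "docker", `systemctl is-active crio` should print "inactive" (systemctl
# prints a single state word such as "active", "inactive", or "failed").
# In this run the output was "\n\n" because SSH never reached the node.
# runtime_is_disabled is a hypothetical helper, not minikube test code.
def runtime_is_disabled(is_active_output: str) -> bool:
    """True if the captured `systemctl is-active` output says inactive."""
    return is_active_output.strip() == "inactive"
```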

TestFunctional/parallel/Version/components (2.14s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2051: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2051: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 version -o=json --components: exit status 80 (2.1373605s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_version_584df66c7473738ba6bddab8b00bad09d875c20e_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2053: error version: exit status 80
functional_test.go:2058: expected to see "buildctl" in the minikube version --components but got:

functional_test.go:2058: expected to see "commit" in the minikube version --components but got:

functional_test.go:2058: expected to see "containerd" in the minikube version --components but got:

functional_test.go:2058: expected to see "crictl" in the minikube version --components but got:

functional_test.go:2058: expected to see "crio" in the minikube version --components but got:

functional_test.go:2058: expected to see "ctr" in the minikube version --components but got:

functional_test.go:2058: expected to see "docker" in the minikube version --components but got:

functional_test.go:2058: expected to see "minikubeVersion" in the minikube version --components but got:

functional_test.go:2058: expected to see "podman" in the minikube version --components but got:

functional_test.go:2058: expected to see "run" in the minikube version --components but got:

functional_test.go:2058: expected to see "crun" in the minikube version --components but got:

--- FAIL: TestFunctional/parallel/Version/components (2.14s)
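The assertions at functional_test.go:2058 search the raw `minikube version -o=json --components` output for each expected key; because the command printed nothing here, every key is reported missing. A sketch of that check — the key list is taken verbatim from the log above, while the helper itself is hypothetical:

```python
# Hedged sketch of the functional_test.go:2058 check: each expected
# component key is looked for as a substring of the raw command output,
# mirroring the grep-style assertion in the Go test. Key list copied
# from this log; missing_components is a hypothetical helper.
EXPECTED_COMPONENTS = [
    "buildctl", "commit", "containerd", "crictl", "crio", "ctr",
    "docker", "minikubeVersion", "podman", "run", "crun",
]

def missing_components(raw_output: str):
    """Return the expected keys not present in the command output."""
    return [key for key in EXPECTED_COMPONENTS if key not in raw_output]
```

With the empty output captured in this run, all eleven keys come back missing, which is exactly the cascade of log lines above.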

TestFunctional/parallel/DockerEnv/powershell (7.45s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:440: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20211117223105-9504"

=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:440: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20211117223105-9504": exit status 1 (7.4456664s)

-- stdout --
	functional-20211117223105-9504
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_docker-env_547776f721aba6dceba24106cb61c1127a06fa4f_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	false : The term 'false' is not recognized as the name of a cmdlet, function, script file, or operable program. Check 
	the spelling of the name, or if a path was included, verify that the path is correct and try again.
	At line:1 char:1
	+ false exit code 80
	+ ~~~~~
	    + CategoryInfo          : ObjectNotFound: (false:String) [], CommandNotFoundException
	    + FullyQualifiedErrorId : CommandNotFoundException
	 
	E1117 22:34:27.981283    8216 status.go:258] status error: host: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	E1117 22:34:27.981283    8216 status.go:261] The "functional-20211117223105-9504" host does not exist!

** /stderr **
functional_test.go:446: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/powershell (7.45s)
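The `CommandNotFoundException` above is a side effect of piping a failed `docker-env` invocation straight into `Invoke-Expression`: on error, the captured output ends with the line `false exit code 80`, and PowerShell has no `false` command to run. A hedged sketch of a guard that evaluates only output resembling PowerShell `$Env:` assignments — a hypothetical helper, not part of minikube:

```python
# Hedged sketch: the docker-env output captured above contained the line
# "false exit code 80", which PowerShell cannot execute ("The term 'false'
# is not recognized..."). A caller could screen the output before eval'ing
# it. safe_to_invoke is a hypothetical helper, not part of minikube.
def safe_to_invoke(docker_env_output: str) -> bool:
    """True only if every non-empty line looks like a PowerShell $Env:
    assignment or a comment, i.e. plausibly safe for Invoke-Expression."""
    lines = [ln.strip() for ln in docker_env_output.splitlines() if ln.strip()]
    return bool(lines) and all(
        ln.startswith("$Env:") or ln.startswith("#") for ln in lines
    )
```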

TestFunctional/parallel/UpdateContextCmd/no_changes (1.89s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1897: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 update-context --alsologtostderr -v=2
functional_test.go:1897: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 update-context --alsologtostderr -v=2: exit status 80 (1.8893651s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 22:34:40.008767    9872 out.go:297] Setting OutFile to fd 792 ...
	I1117 22:34:40.088777    9872 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:34:40.088777    9872 out.go:310] Setting ErrFile to fd 892...
	I1117 22:34:40.088777    9872 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:34:40.098770    9872 mustload.go:65] Loading cluster: functional-20211117223105-9504
	I1117 22:34:40.099767    9872 config.go:176] Loaded profile config "functional-20211117223105-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:34:40.107778    9872 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:34:41.657412    9872 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:34:41.657412    9872 cli_runner.go:168] Completed: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: (1.5496225s)
	I1117 22:34:41.660445    9872 out.go:176] 
	W1117 22:34:41.660445    9872 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	W1117 22:34:41.660445    9872 out.go:241] * 
	* 
	W1117 22:34:41.668464    9872 out.go:241] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_0.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_0.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 22:34:41.671379    9872 out.go:176] 

** /stderr **
functional_test.go:1899: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:1904: update-context: got="\n\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (1.89s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (1.86s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1897: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1897: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 update-context --alsologtostderr -v=2: exit status 80 (1.8589519s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 22:34:41.886732    2248 out.go:297] Setting OutFile to fd 676 ...
	I1117 22:34:41.945732    2248 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:34:41.945732    2248 out.go:310] Setting ErrFile to fd 900...
	I1117 22:34:41.945732    2248 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:34:41.956733    2248 mustload.go:65] Loading cluster: functional-20211117223105-9504
	I1117 22:34:41.956733    2248 config.go:176] Loaded profile config "functional-20211117223105-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:34:41.964741    2248 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:34:43.516069    2248 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:34:43.516069    2248 cli_runner.go:168] Completed: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: (1.5513162s)
	I1117 22:34:43.519310    2248 out.go:176] 
	W1117 22:34:43.519588    2248 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	W1117 22:34:43.519588    2248 out.go:241] * 
	* 
	W1117 22:34:43.528381    2248 out.go:241] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_0.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_0.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 22:34:43.530611    2248 out.go:176] 

** /stderr **
functional_test.go:1899: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:1904: update-context: got="\n\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (1.86s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (1.85s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1897: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1897: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 update-context --alsologtostderr -v=2: exit status 80 (1.843293s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 22:34:43.750694    2132 out.go:297] Setting OutFile to fd 644 ...
	I1117 22:34:43.813190    2132 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:34:43.813190    2132 out.go:310] Setting ErrFile to fd 988...
	I1117 22:34:43.813190    2132 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:34:43.823548    2132 mustload.go:65] Loading cluster: functional-20211117223105-9504
	I1117 22:34:43.824962    2132 config.go:176] Loaded profile config "functional-20211117223105-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:34:43.833505    2132 cli_runner.go:115] Run: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}
	W1117 22:34:45.351203    2132 cli_runner.go:162] docker container inspect functional-20211117223105-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:34:45.351203    2132 cli_runner.go:168] Completed: docker container inspect functional-20211117223105-9504 --format={{.State.Status}}: (1.5174238s)
	I1117 22:34:45.365830    2132 out.go:176] 
	W1117 22:34:45.366624    2132 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	W1117 22:34:45.366624    2132 out.go:241] * 
	* 
	W1117 22:34:45.374665    2132 out.go:241] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_0.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                  │
	│                                                                                                                                │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                       │
	│    * Please also attach the following file to the GitHub issue:                                                                │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_update-context_9738a94781505e531269d5196158beef5ee79b06_0.log    │
	│                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 22:34:45.376747    2132 out.go:176] 

** /stderr **
functional_test.go:1899: failed to run minikube update-context: args "out/minikube-windows-amd64.exe -p functional-20211117223105-9504 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:1904: update-context: got="\n\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (1.85s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:143: failed to get Kubernetes client for "functional-20211117223105-9504": client config: context "functional-20211117223105-9504" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)
TestFunctional/parallel/ImageCommands/ImageList (1.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageList
=== PAUSE TestFunctional/parallel/ImageCommands/ImageList
=== CONT  TestFunctional/parallel/ImageCommands/ImageList
functional_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image ls
=== CONT  TestFunctional/parallel/ImageCommands/ImageList
functional_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image ls: (1.8049713s)
functional_test.go:255: expected k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageList (1.81s)
TestFunctional/parallel/ImageCommands/ImageBuild (5.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:264: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 ssh pgrep buildkitd: exit status 80 (1.81338s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117223105-9504": docker container inspect functional-20211117223105-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117223105-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_f5578f3b7737bbd9a15ad6eab50db6197ebdaf5a_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:271: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image build -t localhost/my-image:functional-20211117223105-9504 testdata\build
functional_test.go:271: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image build -t localhost/my-image:functional-20211117223105-9504 testdata\build: (1.7534136s)
functional_test.go:389: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image ls
functional_test.go:389: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image ls: (1.7470285s)
functional_test.go:384: expected "localhost/my-image:functional-20211117223105-9504" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (5.31s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (9.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211117223105-9504
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211117223105-9504: (7.2145724s)
functional_test.go:389: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image ls
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:389: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image ls: (1.83006s)
functional_test.go:384: expected "gcr.io/google-containers/addon-resizer:functional-20211117223105-9504" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (9.04s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.77s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image save gcr.io/google-containers/addon-resizer:functional-20211117223105-9504 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image save gcr.io/google-containers/addon-resizer:functional-20211117223105-9504 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (1.7708248s)
functional_test.go:327: expected "C:\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.77s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.77s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
functional_test.go:350: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: exit status 80 (1.7738333s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_IMAGE_LOAD: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\C_\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_image_4f97aa0f12ba576a16ca2b05292f7afcda49921e_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:352: loading image into minikube from file: exit status 80
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_IMAGE_LOAD: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\C_\\jenkins\\workspace\\Docker_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_image_4f97aa0f12ba576a16ca2b05292f7afcda49921e_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.77s)
TestIngressAddonLegacy/StartLegacyK8sCluster (44.04s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20211117223942-9504 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
ingress_addon_legacy_test.go:40: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20211117223942-9504 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: exit status 80 (43.9536124s)
-- stdout --
	* [ingress-addon-legacy-20211117223942-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node ingress-addon-legacy-20211117223942-9504 in cluster ingress-addon-legacy-20211117223942-9504
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* docker "ingress-addon-legacy-20211117223942-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	
	
-- /stdout --
** stderr ** 
	I1117 22:39:42.907832    8828 out.go:297] Setting OutFile to fd 920 ...
	I1117 22:39:42.966827    8828 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:39:42.966827    8828 out.go:310] Setting ErrFile to fd 700...
	I1117 22:39:42.966827    8828 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:39:42.977833    8828 out.go:304] Setting JSON to false
	I1117 22:39:42.979824    8828 start.go:112] hostinfo: {"hostname":"minikube2","uptime":78098,"bootTime":1637110684,"procs":125,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 22:39:42.980831    8828 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 22:39:42.984821    8828 out.go:176] * [ingress-addon-legacy-20211117223942-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 22:39:42.984821    8828 notify.go:174] Checking for updates...
	I1117 22:39:42.988834    8828 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 22:39:42.990821    8828 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 22:39:42.992827    8828 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 22:39:42.993858    8828 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 22:39:44.536708    8828 docker.go:132] docker version: linux-19.03.12
	I1117 22:39:44.541893    8828 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:39:44.882208    8828 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:39:44.615533459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:39:44.887679    8828 out.go:176] * Using the docker driver based on user configuration
	I1117 22:39:44.887679    8828 start.go:280] selected driver: docker
	I1117 22:39:44.887679    8828 start.go:775] validating driver "docker" against <nil>
	I1117 22:39:44.887679    8828 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 22:39:44.951739    8828 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:39:45.296225    8828 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:39:45.030832961 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:39:45.296225    8828 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 22:39:45.297019    8828 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 22:39:45.297019    8828 cni.go:93] Creating CNI manager for ""
	I1117 22:39:45.297019    8828 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 22:39:45.297019    8828 start_flags.go:282] config:
	{Name:ingress-addon-legacy-20211117223942-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20211117223942-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:39:45.301125    8828 out.go:176] * Starting control plane node ingress-addon-legacy-20211117223942-9504 in cluster ingress-addon-legacy-20211117223942-9504
	I1117 22:39:45.301125    8828 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 22:39:45.304463    8828 out.go:176] * Pulling base image ...
	I1117 22:39:45.304726    8828 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1117 22:39:45.304847    8828 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 22:39:45.348216    8828 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1117 22:39:45.348216    8828 cache.go:57] Caching tarball of preloaded images
	I1117 22:39:45.348760    8828 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1117 22:39:45.351957    8828 out.go:176] * Downloading Kubernetes v1.18.20 preload ...
	I1117 22:39:45.352023    8828 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1117 22:39:45.398574    8828 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 22:39:45.398574    8828 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 22:39:45.416754    8828 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:de306a65f7d728d77c3b068e74796a19 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1117 22:39:49.520970    8828 preload.go:248] saving checksum for preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1117 22:39:49.521749    8828 preload.go:255] verifying checksumm of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1117 22:39:51.029814    8828 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1117 22:39:51.029814    8828 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-20211117223942-9504\config.json ...
	I1117 22:39:51.030790    8828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-20211117223942-9504\config.json: {Name:mkcdde4331bfe93ec4dcd90d4a4a428a28cc67b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 22:39:51.031790    8828 cache.go:206] Successfully downloaded all kic artifacts
	I1117 22:39:51.031790    8828 start.go:313] acquiring machines lock for ingress-addon-legacy-20211117223942-9504: {Name:mkc92e2bba0de4cc4adf37d127a284f386317a3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 22:39:51.032424    8828 start.go:317] acquired machines lock for "ingress-addon-legacy-20211117223942-9504" in 115.7µs
	I1117 22:39:51.032625    8828 start.go:89] Provisioning new machine with config: &{Name:ingress-addon-legacy-20211117223942-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20211117223942-9504 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ControlPlane:true Worker:true}
	I1117 22:39:51.032660    8828 start.go:126] createHost starting for "" (driver="docker")
	I1117 22:39:51.616375    8828 out.go:203] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1117 22:39:51.617369    8828 start.go:160] libmachine.API.Create for "ingress-addon-legacy-20211117223942-9504" (driver="docker")
	I1117 22:39:51.617683    8828 client.go:168] LocalClient.Create starting
	I1117 22:39:51.618432    8828 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 22:39:51.618432    8828 main.go:130] libmachine: Decoding PEM data...
	I1117 22:39:51.618432    8828 main.go:130] libmachine: Parsing certificate...
	I1117 22:39:51.618955    8828 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 22:39:51.619199    8828 main.go:130] libmachine: Decoding PEM data...
	I1117 22:39:51.619199    8828 main.go:130] libmachine: Parsing certificate...
	I1117 22:39:51.625284    8828 cli_runner.go:115] Run: docker network inspect ingress-addon-legacy-20211117223942-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 22:39:51.714128    8828 cli_runner.go:162] docker network inspect ingress-addon-legacy-20211117223942-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 22:39:51.717838    8828 network_create.go:254] running [docker network inspect ingress-addon-legacy-20211117223942-9504] to gather additional debugging logs...
	I1117 22:39:51.717838    8828 cli_runner.go:115] Run: docker network inspect ingress-addon-legacy-20211117223942-9504
	W1117 22:39:51.809060    8828 cli_runner.go:162] docker network inspect ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	I1117 22:39:51.809142    8828 network_create.go:257] error running [docker network inspect ingress-addon-legacy-20211117223942-9504]: docker network inspect ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20211117223942-9504
	I1117 22:39:51.809142    8828 network_create.go:259] output of [docker network inspect ingress-addon-legacy-20211117223942-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20211117223942-9504
	
	** /stderr **
	I1117 22:39:51.813122    8828 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 22:39:51.919649    8828 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00010c278] misses:0}
	I1117 22:39:51.919773    8828 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 22:39:51.919773    8828 network_create.go:106] attempt to create docker network ingress-addon-legacy-20211117223942-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 22:39:51.924197    8828 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20211117223942-9504
	I1117 22:39:52.389739    8828 network_create.go:90] docker network ingress-addon-legacy-20211117223942-9504 192.168.49.0/24 created
	I1117 22:39:52.389929    8828 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20211117223942-9504" container
	I1117 22:39:52.398556    8828 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 22:39:52.494315    8828 cli_runner.go:115] Run: docker volume create ingress-addon-legacy-20211117223942-9504 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20211117223942-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 22:39:52.588587    8828 oci.go:102] Successfully created a docker volume ingress-addon-legacy-20211117223942-9504
	I1117 22:39:52.592832    8828 cli_runner.go:115] Run: docker run --rm --name ingress-addon-legacy-20211117223942-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20211117223942-9504 --entrypoint /usr/bin/test -v ingress-addon-legacy-20211117223942-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 22:39:53.905687    8828 cli_runner.go:168] Completed: docker run --rm --name ingress-addon-legacy-20211117223942-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20211117223942-9504 --entrypoint /usr/bin/test -v ingress-addon-legacy-20211117223942-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.3128458s)
	I1117 22:39:53.905687    8828 oci.go:106] Successfully prepared a docker volume ingress-addon-legacy-20211117223942-9504
	I1117 22:39:53.905687    8828 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1117 22:39:53.905687    8828 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 22:39:53.911113    8828 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:39:53.911113    8828 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20211117223942-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 22:39:54.040788    8828 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20211117223942-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 22:39:54.040976    8828 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20211117223942-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 22:39:54.241077    8828 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:39:53.990774483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 22:39:54.241459    8828 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 22:39:54.241594    8828 client.go:171] LocalClient.Create took 2.6238916s
	I1117 22:39:56.253104    8828 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:39:56.257842    8828 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504
	W1117 22:39:56.359547    8828 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	I1117 22:39:56.359547    8828 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:39:56.641099    8828 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504
	W1117 22:39:56.732217    8828 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	I1117 22:39:56.732435    8828 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:39:57.277921    8828 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504
	W1117 22:39:57.359729    8828 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	I1117 22:39:57.359995    8828 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:39:58.021181    8828 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504
	W1117 22:39:58.151300    8828 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	W1117 22:39:58.151651    8828 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	
	W1117 22:39:58.151651    8828 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:39:58.151651    8828 start.go:129] duration metric: createHost completed in 7.1189376s
	I1117 22:39:58.151651    8828 start.go:80] releasing machines lock for "ingress-addon-legacy-20211117223942-9504", held for 7.119174s
	W1117 22:39:58.151651    8828 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:39:58.160530    8828 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}
	W1117 22:39:58.259384    8828 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:39:58.259384    8828 delete.go:82] Unable to get host status for ingress-addon-legacy-20211117223942-9504, assuming it has already been deleted: state: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	W1117 22:39:58.259384    8828 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:39:58.259968    8828 start.go:547] Will try again in 5 seconds ...
	I1117 22:40:03.261176    8828 start.go:313] acquiring machines lock for ingress-addon-legacy-20211117223942-9504: {Name:mkc92e2bba0de4cc4adf37d127a284f386317a3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 22:40:03.261176    8828 start.go:317] acquired machines lock for "ingress-addon-legacy-20211117223942-9504" in 0s
	I1117 22:40:03.261732    8828 start.go:93] Skipping create...Using existing machine configuration
	I1117 22:40:03.261824    8828 fix.go:55] fixHost starting: 
	I1117 22:40:03.269501    8828 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}
	W1117 22:40:03.363697    8828 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:40:03.363697    8828 fix.go:108] recreateIfNeeded on ingress-addon-legacy-20211117223942-9504: state= err=unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:03.363697    8828 fix.go:113] machineExists: false. err=machine does not exist
	I1117 22:40:03.366541    8828 out.go:176] * docker "ingress-addon-legacy-20211117223942-9504" container is missing, will recreate.
	I1117 22:40:03.366541    8828 delete.go:124] DEMOLISHING ingress-addon-legacy-20211117223942-9504 ...
	I1117 22:40:03.374702    8828 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}
	W1117 22:40:03.464408    8828 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:40:03.464453    8828 stop.go:75] unable to get state: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:03.464529    8828 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:03.472652    8828 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}
	W1117 22:40:03.572061    8828 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:40:03.572270    8828 delete.go:82] Unable to get host status for ingress-addon-legacy-20211117223942-9504, assuming it has already been deleted: state: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:03.578074    8828 cli_runner.go:115] Run: docker container inspect -f {{.Id}} ingress-addon-legacy-20211117223942-9504
	W1117 22:40:03.686003    8828 cli_runner.go:162] docker container inspect -f {{.Id}} ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	I1117 22:40:03.686003    8828 kic.go:360] could not find the container ingress-addon-legacy-20211117223942-9504 to remove it. will try anyways
	I1117 22:40:03.690185    8828 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}
	W1117 22:40:03.782075    8828 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:40:03.782163    8828 oci.go:83] error getting container status, will try to delete anyways: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:03.786070    8828 cli_runner.go:115] Run: docker exec --privileged -t ingress-addon-legacy-20211117223942-9504 /bin/bash -c "sudo init 0"
	W1117 22:40:03.888305    8828 cli_runner.go:162] docker exec --privileged -t ingress-addon-legacy-20211117223942-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 22:40:03.888305    8828 oci.go:658] error shutdown ingress-addon-legacy-20211117223942-9504: docker exec --privileged -t ingress-addon-legacy-20211117223942-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:04.893469    8828 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}
	W1117 22:40:04.977499    8828 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:40:04.977583    8828 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:04.977583    8828 oci.go:672] temporary error: container ingress-addon-legacy-20211117223942-9504 status is  but expect it to be exited
	I1117 22:40:04.977700    8828 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:05.445283    8828 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}
	W1117 22:40:05.529586    8828 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:40:05.529747    8828 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:05.529862    8828 oci.go:672] temporary error: container ingress-addon-legacy-20211117223942-9504 status is  but expect it to be exited
	I1117 22:40:05.529862    8828 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:06.425785    8828 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}
	W1117 22:40:06.511531    8828 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:40:06.511531    8828 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:06.511783    8828 oci.go:672] temporary error: container ingress-addon-legacy-20211117223942-9504 status is  but expect it to be exited
	I1117 22:40:06.511881    8828 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:07.153581    8828 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}
	W1117 22:40:07.237388    8828 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:40:07.237542    8828 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:07.237542    8828 oci.go:672] temporary error: container ingress-addon-legacy-20211117223942-9504 status is  but expect it to be exited
	I1117 22:40:07.237542    8828 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:08.350119    8828 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}
	W1117 22:40:08.439494    8828 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:40:08.439597    8828 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:08.439597    8828 oci.go:672] temporary error: container ingress-addon-legacy-20211117223942-9504 status is  but expect it to be exited
	I1117 22:40:08.439669    8828 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:09.957196    8828 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}
	W1117 22:40:10.045661    8828 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:40:10.045733    8828 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:10.045803    8828 oci.go:672] temporary error: container ingress-addon-legacy-20211117223942-9504 status is  but expect it to be exited
	I1117 22:40:10.045803    8828 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:13.095565    8828 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}
	W1117 22:40:13.182350    8828 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:40:13.182416    8828 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:13.182416    8828 oci.go:672] temporary error: container ingress-addon-legacy-20211117223942-9504 status is  but expect it to be exited
	I1117 22:40:13.182416    8828 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:18.969767    8828 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}
	W1117 22:40:19.060032    8828 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:40:19.060032    8828 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:19.060032    8828 oci.go:672] temporary error: container ingress-addon-legacy-20211117223942-9504 status is  but expect it to be exited
	I1117 22:40:19.060032    8828 oci.go:87] couldn't shut down ingress-addon-legacy-20211117223942-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	 
	I1117 22:40:19.062529    8828 cli_runner.go:115] Run: docker rm -f -v ingress-addon-legacy-20211117223942-9504
	W1117 22:40:19.151660    8828 cli_runner.go:162] docker rm -f -v ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	W1117 22:40:19.152694    8828 delete.go:139] delete failed (probably ok) <nil>
	I1117 22:40:19.152694    8828 fix.go:120] Sleeping 1 second for extra luck!
	I1117 22:40:20.153679    8828 start.go:126] createHost starting for "" (driver="docker")
	I1117 22:40:20.172297    8828 out.go:203] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1117 22:40:20.172998    8828 start.go:160] libmachine.API.Create for "ingress-addon-legacy-20211117223942-9504" (driver="docker")
	I1117 22:40:20.173120    8828 client.go:168] LocalClient.Create starting
	I1117 22:40:20.173737    8828 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 22:40:20.173982    8828 main.go:130] libmachine: Decoding PEM data...
	I1117 22:40:20.173982    8828 main.go:130] libmachine: Parsing certificate...
	I1117 22:40:20.174187    8828 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 22:40:20.174410    8828 main.go:130] libmachine: Decoding PEM data...
	I1117 22:40:20.174410    8828 main.go:130] libmachine: Parsing certificate...
	I1117 22:40:20.179838    8828 cli_runner.go:115] Run: docker network inspect ingress-addon-legacy-20211117223942-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 22:40:20.275737    8828 network_create.go:67] Found existing network {name:ingress-addon-legacy-20211117223942-9504 subnet:0xc0014f1ec0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 22:40:20.275737    8828 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20211117223942-9504" container
	I1117 22:40:20.283718    8828 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 22:40:20.374403    8828 cli_runner.go:115] Run: docker volume create ingress-addon-legacy-20211117223942-9504 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20211117223942-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 22:40:20.464596    8828 oci.go:102] Successfully created a docker volume ingress-addon-legacy-20211117223942-9504
	I1117 22:40:20.469430    8828 cli_runner.go:115] Run: docker run --rm --name ingress-addon-legacy-20211117223942-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20211117223942-9504 --entrypoint /usr/bin/test -v ingress-addon-legacy-20211117223942-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 22:40:21.338880    8828 oci.go:106] Successfully prepared a docker volume ingress-addon-legacy-20211117223942-9504
	I1117 22:40:21.339038    8828 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1117 22:40:21.339142    8828 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 22:40:21.344694    8828 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:40:21.344773    8828 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20211117223942-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 22:40:21.450712    8828 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20211117223942-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 22:40:21.450712    8828 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20211117223942-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 22:40:21.689291    8828 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 22:40:21.432865094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 22:40:21.689893    8828 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 22:40:21.689968    8828 client.go:171] LocalClient.Create took 1.516836s
	I1117 22:40:23.698154    8828 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:40:23.701497    8828 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504
	W1117 22:40:23.784662    8828 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	I1117 22:40:23.784781    8828 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:23.967610    8828 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504
	W1117 22:40:24.056949    8828 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	I1117 22:40:24.057220    8828 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:24.393073    8828 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504
	W1117 22:40:24.476573    8828 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	I1117 22:40:24.476766    8828 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:24.942784    8828 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504
	W1117 22:40:25.029279    8828 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	W1117 22:40:25.029490    8828 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	
	W1117 22:40:25.029557    8828 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:25.029557    8828 start.go:129] duration metric: createHost completed in 4.8758419s
	I1117 22:40:25.036763    8828 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:40:25.040108    8828 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504
	W1117 22:40:25.132443    8828 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	I1117 22:40:25.132954    8828 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:25.333957    8828 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504
	W1117 22:40:25.426238    8828 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	I1117 22:40:25.426238    8828 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:25.728901    8828 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504
	W1117 22:40:25.820451    8828 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	I1117 22:40:25.820649    8828 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:26.488058    8828 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504
	W1117 22:40:26.577158    8828 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504 returned with exit code 1
	W1117 22:40:26.577670    8828 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	
	W1117 22:40:26.577745    8828 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117223942-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117223942-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	I1117 22:40:26.577745    8828 fix.go:57] fixHost completed within 23.3158386s
	I1117 22:40:26.577745    8828 start.go:80] releasing machines lock for "ingress-addon-legacy-20211117223942-9504", held for 23.3163941s
	W1117 22:40:26.578256    8828 out.go:241] * Failed to start docker container. Running "minikube delete -p ingress-addon-legacy-20211117223942-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p ingress-addon-legacy-20211117223942-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:40:26.583673    8828 out.go:176] 
	W1117 22:40:26.583914    8828 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 22:40:26.583914    8828 out.go:241] * 
	* 
	W1117 22:40:26.584713    8828 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 22:40:26.587336    8828 out.go:176] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:42: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20211117223942-9504 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker" : exit status 80
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (44.04s)
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (3.71s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20211117223942-9504 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20211117223942-9504 addons enable ingress --alsologtostderr -v=5: exit status 10 (1.811715s)

                                                
                                                
-- stdout --
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 22:40:26.953770    8896 out.go:297] Setting OutFile to fd 960 ...
	I1117 22:40:27.037190    8896 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:40:27.037190    8896 out.go:310] Setting ErrFile to fd 840...
	I1117 22:40:27.037190    8896 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:40:27.050226    8896 config.go:176] Loaded profile config "ingress-addon-legacy-20211117223942-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1117 22:40:27.050226    8896 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20211117223942-9504"
	I1117 22:40:27.050226    8896 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20211117223942-9504"
	I1117 22:40:27.051274    8896 host.go:66] Checking if "ingress-addon-legacy-20211117223942-9504" exists ...
	I1117 22:40:27.067737    8896 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}
	W1117 22:40:28.546504    8896 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:40:28.546504    8896 cli_runner.go:168] Completed: docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: (1.4785968s)
	W1117 22:40:28.546617    8896 host.go:54] host status for "ingress-addon-legacy-20211117223942-9504" returned error: state: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504
	W1117 22:40:28.546617    8896 addons.go:202] "ingress-addon-legacy-20211117223942-9504" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I1117 22:40:28.546617    8896 addons.go:386] Verifying addon ingress=true in "ingress-addon-legacy-20211117223942-9504"
	I1117 22:40:28.549723    8896 out.go:176] * Verifying ingress addon...
	W1117 22:40:28.550340    8896 loader.go:221] Config not found: C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 22:40:28.553297    8896 out.go:176] 
	W1117 22:40:28.553527    8896 out.go:241] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20211117223942-9504" does not exist: client config: context "ingress-addon-legacy-20211117223942-9504" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20211117223942-9504" does not exist: client config: context "ingress-addon-legacy-20211117223942-9504" does not exist]
	W1117 22:40:28.553561    8896 out.go:241] * 
	* 
	W1117 22:40:28.559945    8896 out.go:241] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_addons_765a40db962dd8139438d8c956b5e6e825316d2d_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_addons_765a40db962dd8139438d8c956b5e6e825316d2d_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 22:40:28.559945    8896 out.go:176] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:72: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20211117223942-9504
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20211117223942-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "ingress-addon-legacy-20211117223942-9504",
	        "Id": "bfa89ba8bb3f15d6a64594182cbabe6f6f189b9861668c8467f8eedb1d6d26b6",
	        "Created": "2021-11-17T22:39:52.002856936Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20211117223942-9504 -n ingress-addon-legacy-20211117223942-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20211117223942-9504 -n ingress-addon-legacy-20211117223942-9504: exit status 7 (1.7828938s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 22:40:30.451998   10720 status.go:247] status error: host: state: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20211117223942-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (3.71s)
TestIngressAddonLegacy/serial/ValidateIngressAddons (1.83s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:157: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20211117223942-9504
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20211117223942-9504:

-- stdout --
	[
	    {
	        "Name": "ingress-addon-legacy-20211117223942-9504",
	        "Id": "bfa89ba8bb3f15d6a64594182cbabe6f6f189b9861668c8467f8eedb1d6d26b6",
	        "Created": "2021-11-17T22:39:52.002856936Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20211117223942-9504 -n ingress-addon-legacy-20211117223942-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-20211117223942-9504 -n ingress-addon-legacy-20211117223942-9504: exit status 7 (1.7276256s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:40:34.091931    8180 status.go:247] status error: host: state: unknown state "ingress-addon-legacy-20211117223942-9504": docker container inspect ingress-addon-legacy-20211117223942-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117223942-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20211117223942-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (1.83s)

TestJSONOutput/start/Command (37.19s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-20211117224036-9504 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-20211117224036-9504 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: exit status 80 (37.1918473s)

-- stdout --
	{"specversion":"1.0","id":"daf71d6e-6aa9-4bc9-8744-8655d35cb912","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-20211117224036-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b20b0c1f-303a-466a-b1e7-788340aa3903","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"56de420f-15f3-41d3-b475-5c7aa08b6758","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"625f9097-b480-4601-8840-f97c7b2db7e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"5663f820-14c1-4d3f-8171-f2178397fad4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"113a1fdb-0e70-4531-8307-44bb21c04953","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node json-output-20211117224036-9504 in cluster json-output-20211117224036-9504","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f01801a-c321-4735-a354-3403829bfdef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0ab317b5-9720-48eb-a12e-34894bbdf1e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f3fb168-5174-4ec9-a5a3-502b04c9ce20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"}}
	{"specversion":"1.0","id":"fff2bfd8-cdb8-4f5e-9e39-ecf737e143e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"docker \"json-output-20211117224036-9504\" container is missing, will recreate.","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"be16ee4f-de5b-428d-b4ca-f657a67dd2b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"66aa2b29-7f65-44cc-945f-ee8d399d3e25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start docker container. Running \"minikube delete -p json-output-20211117224036-9504\" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"}}
	{"specversion":"1.0","id":"4a0c4834-6e57-408c-8a1a-0059d1e0dca4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules","name":"GUEST_PROVISION","url":""}}

-- /stdout --
** stderr ** 
	E1117 22:40:41.602273    6492 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	E1117 22:41:09.031222    6492 oci.go:197] error getting kernel modules path: Unable to locate kernel modules

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe start -p json-output-20211117224036-9504 --output=json --user=testUser --memory=2200 --wait=true --driver=docker": exit status 80
--- FAIL: TestJSONOutput/start/Command (37.19s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 8 has already been assigned to another step:
Creating docker container (CPUs=2, Memory=2200MB) ...
Cannot use for:
docker "json-output-20211117224036-9504" container is missing, will recreate.
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: daf71d6e-6aa9-4bc9-8744-8655d35cb912
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-20211117224036-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b20b0c1f-303a-466a-b1e7-788340aa3903
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 56de420f-15f3-41d3-b475-5c7aa08b6758
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 625f9097-b480-4601-8840-f97c7b2db7e7
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=12739"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 5663f820-14c1-4d3f-8171-f2178397fad4
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 113a1fdb-0e70-4531-8307-44bb21c04953
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-20211117224036-9504 in cluster json-output-20211117224036-9504",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9f01801a-c321-4735-a354-3403829bfdef
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0ab317b5-9720-48eb-a12e-34894bbdf1e0
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 8f3fb168-5174-4ec9-a5a3-502b04c9ce20
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: fff2bfd8-cdb8-4f5e-9e39-ecf737e143e3
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "docker \"json-output-20211117224036-9504\" container is missing, will recreate.",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: be16ee4f-de5b-428d-b4ca-f657a67dd2b4
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 66aa2b29-7f65-44cc-945f-ee8d399d3e25
datacontenttype: application/json
Data,
{
"message": "Failed to start docker container. Running \"minikube delete -p json-output-20211117224036-9504\" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 4a0c4834-6e57-408c-8a1a-0059d1e0dca4
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules",
"name": "GUEST_PROVISION",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.01s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:133: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: daf71d6e-6aa9-4bc9-8744-8655d35cb912
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-20211117224036-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b20b0c1f-303a-466a-b1e7-788340aa3903
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 56de420f-15f3-41d3-b475-5c7aa08b6758
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 625f9097-b480-4601-8840-f97c7b2db7e7
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=12739"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 5663f820-14c1-4d3f-8171-f2178397fad4
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 113a1fdb-0e70-4531-8307-44bb21c04953
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-20211117224036-9504 in cluster json-output-20211117224036-9504",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9f01801a-c321-4735-a354-3403829bfdef
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0ab317b5-9720-48eb-a12e-34894bbdf1e0
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 8f3fb168-5174-4ec9-a5a3-502b04c9ce20
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: fff2bfd8-cdb8-4f5e-9e39-ecf737e143e3
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "docker \"json-output-20211117224036-9504\" container is missing, will recreate.",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: be16ee4f-de5b-428d-b4ca-f657a67dd2b4
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 66aa2b29-7f65-44cc-945f-ee8d399d3e25
datacontenttype: application/json
Data,
{
"message": "Failed to start docker container. Running \"minikube delete -p json-output-20211117224036-9504\" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 4a0c4834-6e57-408c-8a1a-0059d1e0dca4
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules",
"name": "GUEST_PROVISION",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.01s)

TestJSONOutput/pause/Command (1.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-20211117224036-9504 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p json-output-20211117224036-9504 --output=json --user=testUser: exit status 80 (1.7504603s)

-- stdout --
	{"specversion":"1.0","id":"a5af1054-1e1b-40e6-966f-4b122318a7cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"state: unknown state \"json-output-20211117224036-9504\": docker container inspect json-output-20211117224036-9504 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20211117224036-9504","name":"GUEST_STATUS","url":""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe pause -p json-output-20211117224036-9504 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/unpause/Command (1.82s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-20211117224036-9504 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe unpause -p json-output-20211117224036-9504 --output=json --user=testUser: exit status 80 (1.820175s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "json-output-20211117224036-9504": docker container inspect json-output-20211117224036-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: json-output-20211117224036-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_unpause_00b12d9cedab4ae1bb930a621bdee2ada68dbd98_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe unpause -p json-output-20211117224036-9504 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.82s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/stop/Command (14.98s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-20211117224036-9504 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p json-output-20211117224036-9504 --output=json --user=testUser: exit status 82 (14.9766772s)

-- stdout --
	{"specversion":"1.0","id":"147bbffb-266b-4dc3-8821-a384045a9b06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117224036-9504\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"2b39e34d-51e9-4f6c-91fe-d2854461c29a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117224036-9504\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"97ef5360-ed7c-4fb7-9b20-9684b0c2396e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117224036-9504\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"dc3df969-70a9-4407-af18-a92c04cebeeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117224036-9504\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"1fc7c478-ce8e-4684-999a-9b42f1d82f65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117224036-9504\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"afe18874-0f10-4cd7-9d41-ae9bc1674a90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117224036-9504\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"51a096f2-eff1-43e5-8581-5d99f1543e79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"82","issues":"","message":"docker container inspect json-output-20211117224036-9504 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20211117224036-9504","name":"GUEST_STOP_TIMEOUT","url":""}}

-- /stdout --
** stderr ** 
	E1117 22:41:21.169946    7936 daemonize_windows.go:39] error terminating scheduled stop for profile json-output-20211117224036-9504: stopping schedule-stop service for profile json-output-20211117224036-9504: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "json-output-20211117224036-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" json-output-20211117224036-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: json-output-20211117224036-9504

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe stop -p json-output-20211117224036-9504 --output=json --user=testUser": exit status 82
--- FAIL: TestJSONOutput/stop/Command (14.98s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
json_output_test.go:114: step 0 has already been assigned to another step:
Stopping node "json-output-20211117224036-9504"  ...
Cannot use for:
Stopping node "json-output-20211117224036-9504"  ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 147bbffb-266b-4dc3-8821-a384045a9b06
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117224036-9504\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 2b39e34d-51e9-4f6c-91fe-d2854461c29a
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117224036-9504\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 97ef5360-ed7c-4fb7-9b20-9684b0c2396e
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117224036-9504\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: dc3df969-70a9-4407-af18-a92c04cebeeb
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117224036-9504\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 1fc7c478-ce8e-4684-999a-9b42f1d82f65
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117224036-9504\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: afe18874-0f10-4cd7-9d41-ae9bc1674a90
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117224036-9504\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 51a096f2-eff1-43e5-8581-5d99f1543e79
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "82",
"issues": "",
"message": "docker container inspect json-output-20211117224036-9504 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20211117224036-9504",
"name": "GUEST_STOP_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
json_output_test.go:133: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 147bbffb-266b-4dc3-8821-a384045a9b06
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117224036-9504\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 2b39e34d-51e9-4f6c-91fe-d2854461c29a
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117224036-9504\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 97ef5360-ed7c-4fb7-9b20-9684b0c2396e
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117224036-9504\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: dc3df969-70a9-4407-af18-a92c04cebeeb
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117224036-9504\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 1fc7c478-ce8e-4684-999a-9b42f1d82f65
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117224036-9504\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: afe18874-0f10-4cd7-9d41-ae9bc1674a90
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117224036-9504\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 51a096f2-eff1-43e5-8581-5d99f1543e79
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "82",
"issues": "",
"message": "docker container inspect json-output-20211117224036-9504 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20211117224036-9504",
"name": "GUEST_STOP_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestKicCustomNetwork/create_custom_network (221.87s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20211117224137-9504 --network=
kic_custom_network_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20211117224137-9504 --network=: (2m51.0113388s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:107: docker-network-20211117224137-9504 network is not listed by [[docker network ls --format {{.Name}}]]: 
-- stdout --
	bridge
	host
	none

-- /stdout --
helpers_test.go:175: Cleaning up "docker-network-20211117224137-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20211117224137-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20211117224137-9504: (50.731151s)
--- FAIL: TestKicCustomNetwork/create_custom_network (221.87s)

TestMountStart/serial/StartWithMountFirst (39.09s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-20211117225233-9504 --memory=2048 --mount --driver=docker
mount_start_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-1-20211117225233-9504 --memory=2048 --mount --driver=docker: exit status 80 (37.1468661s)

-- stdout --
	* [mount-start-1-20211117225233-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node mount-start-1-20211117225233-9504 in cluster mount-start-1-20211117225233-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "mount-start-1-20211117225233-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 22:52:37.867599    4516 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 22:53:05.262688    4516 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p mount-start-1-20211117225233-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:79: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p mount-start-1-20211117225233-9504 --memory=2048 --mount --driver=docker" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/StartWithMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-20211117225233-9504
helpers_test.go:235: (dbg) docker inspect mount-start-1-20211117225233-9504:

-- stdout --
	[
	    {
	        "Name": "mount-start-1-20211117225233-9504",
	        "Id": "141c2710066f9d4dbdb38111e74ec197b62efcebf462df0101ac96496329a35a",
	        "Created": "2021-11-17T22:52:36.144937665Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-1-20211117225233-9504 -n mount-start-1-20211117225233-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-1-20211117225233-9504 -n mount-start-1-20211117225233-9504: exit status 7 (1.8316152s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:53:12.126905    5160 status.go:247] status error: host: state: unknown state "mount-start-1-20211117225233-9504": docker container inspect mount-start-1-20211117225233-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-1-20211117225233-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-20211117225233-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/StartWithMountFirst (39.09s)

TestMountStart/serial/StartWithMountSecond (39.32s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20211117225233-9504 --memory=2048 --mount --driver=docker
mount_start_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-2-20211117225233-9504 --memory=2048 --mount --driver=docker: exit status 80 (37.4628312s)

-- stdout --
	* [mount-start-2-20211117225233-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node mount-start-2-20211117225233-9504 in cluster mount-start-2-20211117225233-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "mount-start-2-20211117225233-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 22:53:17.260523    1092 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 22:53:44.679789    1092 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p mount-start-2-20211117225233-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:79: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p mount-start-2-20211117225233-9504 --memory=2048 --mount --driver=docker" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/StartWithMountSecond]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117225233-9504
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117225233-9504:

-- stdout --
	[
	    {
	        "Name": "mount-start-2-20211117225233-9504",
	        "Id": "ca55b09be169e0172f0fb4a8234b65f0c3df9472935121c53366a5af26dc3660",
	        "Created": "2021-11-17T22:53:15.422095835Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-20211117225233-9504 -n mount-start-2-20211117225233-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-20211117225233-9504 -n mount-start-2-20211117225233-9504: exit status 7 (1.7419767s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:53:51.446505    8792 status.go:247] status error: host: state: unknown state "mount-start-2-20211117225233-9504": docker container inspect mount-start-2-20211117225233-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117225233-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117225233-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/StartWithMountSecond (39.32s)

TestMountStart/serial/VerifyMountFirst (3.63s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-20211117225233-9504 ssh ls /minikube-host
mount_start_test.go:88: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p mount-start-1-20211117225233-9504 ssh ls /minikube-host: exit status 80 (1.7889022s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "mount-start-1-20211117225233-9504": docker container inspect mount-start-1-20211117225233-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-1-20211117225233-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_8349843e7b5a7594824c24e0ce5e64040ef6553a_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:90: mount failed: "out/minikube-windows-amd64.exe -p mount-start-1-20211117225233-9504 ssh ls /minikube-host" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-20211117225233-9504
helpers_test.go:235: (dbg) docker inspect mount-start-1-20211117225233-9504:

-- stdout --
	[
	    {
	        "Name": "mount-start-1-20211117225233-9504",
	        "Id": "141c2710066f9d4dbdb38111e74ec197b62efcebf462df0101ac96496329a35a",
	        "Created": "2021-11-17T22:52:36.144937665Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-1-20211117225233-9504 -n mount-start-1-20211117225233-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-1-20211117225233-9504 -n mount-start-1-20211117225233-9504: exit status 7 (1.730536s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:53:55.071596   12216 status.go:247] status error: host: state: unknown state "mount-start-1-20211117225233-9504": docker container inspect mount-start-1-20211117225233-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-1-20211117225233-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-20211117225233-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/VerifyMountFirst (3.63s)

TestMountStart/serial/VerifyMountSecond (3.61s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20211117225233-9504 ssh ls /minikube-host
mount_start_test.go:88: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p mount-start-2-20211117225233-9504 ssh ls /minikube-host: exit status 80 (1.7887375s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "mount-start-2-20211117225233-9504": docker container inspect mount-start-2-20211117225233-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117225233-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_8349843e7b5a7594824c24e0ce5e64040ef6553a_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:90: mount failed: "out/minikube-windows-amd64.exe -p mount-start-2-20211117225233-9504 ssh ls /minikube-host" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountSecond]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117225233-9504
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117225233-9504:

-- stdout --
	[
	    {
	        "Name": "mount-start-2-20211117225233-9504",
	        "Id": "ca55b09be169e0172f0fb4a8234b65f0c3df9472935121c53366a5af26dc3660",
	        "Created": "2021-11-17T22:53:15.422095835Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-20211117225233-9504 -n mount-start-2-20211117225233-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-20211117225233-9504 -n mount-start-2-20211117225233-9504: exit status 7 (1.7233014s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:53:58.687579   12108 status.go:247] status error: host: state: unknown state "mount-start-2-20211117225233-9504": docker container inspect mount-start-2-20211117225233-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117225233-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117225233-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/VerifyMountSecond (3.61s)

TestMountStart/serial/VerifyMountPostDelete (3.65s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20211117225233-9504 ssh ls /minikube-host
mount_start_test.go:88: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p mount-start-2-20211117225233-9504 ssh ls /minikube-host: exit status 80 (1.7972718s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "mount-start-2-20211117225233-9504": docker container inspect mount-start-2-20211117225233-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117225233-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_8349843e7b5a7594824c24e0ce5e64040ef6553a_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:90: mount failed: "out/minikube-windows-amd64.exe -p mount-start-2-20211117225233-9504 ssh ls /minikube-host" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostDelete]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117225233-9504
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117225233-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T22:53:16Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "mount-start-2-20211117225233-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/mount-start-2-20211117225233-9504/_data",
	        "Name": "mount-start-2-20211117225233-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-20211117225233-9504 -n mount-start-2-20211117225233-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-20211117225233-9504 -n mount-start-2-20211117225233-9504: exit status 7 (1.7346821s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:54:05.323388    7956 status.go:247] status error: host: state: unknown state "mount-start-2-20211117225233-9504": docker container inspect mount-start-2-20211117225233-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117225233-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117225233-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/VerifyMountPostDelete (3.65s)

TestMountStart/serial/Stop (16.98s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-20211117225233-9504
mount_start_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p mount-start-2-20211117225233-9504: exit status 82 (15.0950417s)

-- stdout --
	* Stopping node "mount-start-2-20211117225233-9504"  ...
	* Stopping node "mount-start-2-20211117225233-9504"  ...
	* Stopping node "mount-start-2-20211117225233-9504"  ...
	* Stopping node "mount-start-2-20211117225233-9504"  ...
	* Stopping node "mount-start-2-20211117225233-9504"  ...
	* Stopping node "mount-start-2-20211117225233-9504"  ...
	
	

-- /stdout --
** stderr ** 
	E1117 22:54:09.017546    6344 daemonize_windows.go:39] error terminating scheduled stop for profile mount-start-2-20211117225233-9504: stopping schedule-stop service for profile mount-start-2-20211117225233-9504: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "mount-start-2-20211117225233-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" mount-start-2-20211117225233-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117225233-9504
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect mount-start-2-20211117225233-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117225233-9504
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:101: stop failed: "out/minikube-windows-amd64.exe stop -p mount-start-2-20211117225233-9504" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117225233-9504
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117225233-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T22:53:16Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "mount-start-2-20211117225233-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/mount-start-2-20211117225233-9504/_data",
	        "Name": "mount-start-2-20211117225233-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-20211117225233-9504 -n mount-start-2-20211117225233-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-20211117225233-9504 -n mount-start-2-20211117225233-9504: exit status 7 (1.7669977s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:54:22.299531    7812 status.go:247] status error: host: state: unknown state "mount-start-2-20211117225233-9504": docker container inspect mount-start-2-20211117225233-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117225233-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117225233-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/Stop (16.98s)

TestMountStart/serial/RestartStopped (59.45s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20211117225233-9504
mount_start_test.go:110: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-2-20211117225233-9504: exit status 80 (57.5185918s)

-- stdout --
	* [mount-start-2-20211117225233-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	* Starting control plane node mount-start-2-20211117225233-9504 in cluster mount-start-2-20211117225233-9504
	* Pulling base image ...
	* docker "mount-start-2-20211117225233-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "mount-start-2-20211117225233-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 22:54:45.005804   10596 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 22:55:14.453616   10596 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p mount-start-2-20211117225233-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:112: restart failed: "out/minikube-windows-amd64.exe start -p mount-start-2-20211117225233-9504" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/RestartStopped]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117225233-9504
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117225233-9504:

-- stdout --
	[
	    {
	        "Name": "mount-start-2-20211117225233-9504",
	        "Id": "2453454e1fe69a51d043e76d00081e607d2e3e93abd5d565328def6edcfa24e5",
	        "Created": "2021-11-17T22:54:43.488455648Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-20211117225233-9504 -n mount-start-2-20211117225233-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-20211117225233-9504 -n mount-start-2-20211117225233-9504: exit status 7 (1.8201179s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:55:21.750440   10040 status.go:247] status error: host: state: unknown state "mount-start-2-20211117225233-9504": docker container inspect mount-start-2-20211117225233-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117225233-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117225233-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/RestartStopped (59.45s)

TestMountStart/serial/VerifyMountPostStop (3.61s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20211117225233-9504 ssh ls /minikube-host
mount_start_test.go:88: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p mount-start-2-20211117225233-9504 ssh ls /minikube-host: exit status 80 (1.7553357s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "mount-start-2-20211117225233-9504": docker container inspect mount-start-2-20211117225233-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117225233-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_8349843e7b5a7594824c24e0ce5e64040ef6553a_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:90: mount failed: "out/minikube-windows-amd64.exe -p mount-start-2-20211117225233-9504 ssh ls /minikube-host" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117225233-9504
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117225233-9504:

-- stdout --
	[
	    {
	        "Name": "mount-start-2-20211117225233-9504",
	        "Id": "2453454e1fe69a51d043e76d00081e607d2e3e93abd5d565328def6edcfa24e5",
	        "Created": "2021-11-17T22:54:43.488455648Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-20211117225233-9504 -n mount-start-2-20211117225233-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-20211117225233-9504 -n mount-start-2-20211117225233-9504: exit status 7 (1.7462829s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:55:25.363498    5208 status.go:247] status error: host: state: unknown state "mount-start-2-20211117225233-9504": docker container inspect mount-start-2-20211117225233-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117225233-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117225233-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/VerifyMountPostStop (3.61s)

TestMultiNode/serial/FreshStart2Nodes (39.08s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:82: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20211117225530-9504 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
multinode_test.go:82: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20211117225530-9504 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: exit status 80 (37.1229918s)
                                                
-- stdout --
	* [multinode-20211117225530-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node multinode-20211117225530-9504 in cluster multinode-20211117225530-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20211117225530-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 22:55:30.416930   10640 out.go:297] Setting OutFile to fd 744 ...
	I1117 22:55:30.482665   10640 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:55:30.482665   10640 out.go:310] Setting ErrFile to fd 688...
	I1117 22:55:30.482665   10640 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:55:30.492464   10640 out.go:304] Setting JSON to false
	I1117 22:55:30.494448   10640 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79046,"bootTime":1637110684,"procs":127,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 22:55:30.494448   10640 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 22:55:30.499627   10640 out.go:176] * [multinode-20211117225530-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 22:55:30.500205   10640 notify.go:174] Checking for updates...
	I1117 22:55:30.503624   10640 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 22:55:30.505937   10640 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 22:55:30.509604   10640 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 22:55:30.510432   10640 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 22:55:32.021581   10640 docker.go:132] docker version: linux-19.03.12
	I1117 22:55:32.022876   10640 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:55:32.357680   10640 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:55:32.107588328 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:55:32.361912   10640 out.go:176] * Using the docker driver based on user configuration
	I1117 22:55:32.361990   10640 start.go:280] selected driver: docker
	I1117 22:55:32.362062   10640 start.go:775] validating driver "docker" against <nil>
	I1117 22:55:32.362147   10640 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 22:55:32.431240   10640 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:55:32.789882   10640 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:55:32.52412296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:55:32.790196   10640 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 22:55:32.790659   10640 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 22:55:32.790766   10640 cni.go:93] Creating CNI manager for ""
	I1117 22:55:32.790766   10640 cni.go:154] 0 nodes found, recommending kindnet
	I1117 22:55:32.790766   10640 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1117 22:55:32.790766   10640 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1117 22:55:32.790919   10640 start_flags.go:277] Found "CNI" CNI - setting NetworkPlugin=cni
	I1117 22:55:32.790919   10640 start_flags.go:282] config:
	{Name:multinode-20211117225530-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117225530-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:55:32.795748   10640 out.go:176] * Starting control plane node multinode-20211117225530-9504 in cluster multinode-20211117225530-9504
	I1117 22:55:32.795877   10640 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 22:55:32.798739   10640 out.go:176] * Pulling base image ...
	I1117 22:55:32.798739   10640 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:55:32.798739   10640 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 22:55:32.798739   10640 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 22:55:32.798739   10640 cache.go:57] Caching tarball of preloaded images
	I1117 22:55:32.799671   10640 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 22:55:32.799671   10640 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 22:55:32.799671   10640 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20211117225530-9504\config.json ...
	I1117 22:55:32.800773   10640 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20211117225530-9504\config.json: {Name:mkee4ae02f87d609117bfd258fda1b0b5a42295d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 22:55:32.889873   10640 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 22:55:32.889873   10640 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 22:55:32.889873   10640 cache.go:206] Successfully downloaded all kic artifacts
	I1117 22:55:32.889873   10640 start.go:313] acquiring machines lock for multinode-20211117225530-9504: {Name:mk4ceed01407c773b7965bcd22df69b99303385f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 22:55:32.889873   10640 start.go:317] acquired machines lock for "multinode-20211117225530-9504" in 0s
	I1117 22:55:32.889873   10640 start.go:89] Provisioning new machine with config: &{Name:multinode-20211117225530-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117225530-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 22:55:32.889873   10640 start.go:126] createHost starting for "" (driver="docker")
	I1117 22:55:32.894441   10640 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 22:55:32.894955   10640 start.go:160] libmachine.API.Create for "multinode-20211117225530-9504" (driver="docker")
	I1117 22:55:32.894955   10640 client.go:168] LocalClient.Create starting
	I1117 22:55:32.895447   10640 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 22:55:32.895697   10640 main.go:130] libmachine: Decoding PEM data...
	I1117 22:55:32.895777   10640 main.go:130] libmachine: Parsing certificate...
	I1117 22:55:32.895997   10640 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 22:55:32.895997   10640 main.go:130] libmachine: Decoding PEM data...
	I1117 22:55:32.895997   10640 main.go:130] libmachine: Parsing certificate...
	I1117 22:55:32.900143   10640 cli_runner.go:115] Run: docker network inspect multinode-20211117225530-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 22:55:33.000258   10640 cli_runner.go:162] docker network inspect multinode-20211117225530-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 22:55:33.004648   10640 network_create.go:254] running [docker network inspect multinode-20211117225530-9504] to gather additional debugging logs...
	I1117 22:55:33.004782   10640 cli_runner.go:115] Run: docker network inspect multinode-20211117225530-9504
	W1117 22:55:33.102134   10640 cli_runner.go:162] docker network inspect multinode-20211117225530-9504 returned with exit code 1
	I1117 22:55:33.102134   10640 network_create.go:257] error running [docker network inspect multinode-20211117225530-9504]: docker network inspect multinode-20211117225530-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20211117225530-9504
	I1117 22:55:33.102134   10640 network_create.go:259] output of [docker network inspect multinode-20211117225530-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20211117225530-9504
	
	** /stderr **
	I1117 22:55:33.106988   10640 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 22:55:33.228138   10640 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00066c260] misses:0}
	I1117 22:55:33.228138   10640 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 22:55:33.228138   10640 network_create.go:106] attempt to create docker network multinode-20211117225530-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 22:55:33.231912   10640 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117225530-9504
	I1117 22:55:33.451333   10640 network_create.go:90] docker network multinode-20211117225530-9504 192.168.49.0/24 created
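	[editor's note] The network-create step logged above maps onto a plain Docker CLI invocation. A minimal sketch that reproduces the same bridge network and reads the subnet back with the Go template style cli_runner uses; the network name `demo-net` is illustrative, not the cluster name, and this assumes a working Docker daemon:

```shell
# Create a bridge network with the same subnet/gateway/MTU minikube chose
# (name "demo-net" is a stand-in for the real cluster network name).
docker network create --driver=bridge \
  --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
  -o com.docker.network.driver.mtu=1500 \
  demo-net

# Read the subnet back out with a Go template, as the inspect calls above do.
docker network inspect demo-net \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'

# Clean up the illustrative network.
docker network rm demo-net
```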
	I1117 22:55:33.451440   10640 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20211117225530-9504" container
	I1117 22:55:33.459061   10640 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 22:55:33.556871   10640 cli_runner.go:115] Run: docker volume create multinode-20211117225530-9504 --label name.minikube.sigs.k8s.io=multinode-20211117225530-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 22:55:33.656427   10640 oci.go:102] Successfully created a docker volume multinode-20211117225530-9504
	I1117 22:55:33.664825   10640 cli_runner.go:115] Run: docker run --rm --name multinode-20211117225530-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117225530-9504 --entrypoint /usr/bin/test -v multinode-20211117225530-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 22:55:34.730984   10640 cli_runner.go:168] Completed: docker run --rm --name multinode-20211117225530-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117225530-9504 --entrypoint /usr/bin/test -v multinode-20211117225530-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.0661511s)
	I1117 22:55:34.731274   10640 oci.go:106] Successfully prepared a docker volume multinode-20211117225530-9504
	I1117 22:55:34.731274   10640 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:55:34.731388   10640 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 22:55:34.736364   10640 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:55:34.736656   10640 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 22:55:34.860378   10640 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 22:55:34.860481   10640 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 22:55:35.074081   10640 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:55:34.816624784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 22:55:35.074656   10640 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 22:55:35.074656   10640 client.go:171] LocalClient.Create took 2.1796024s
	I1117 22:55:37.083593   10640 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:55:37.088125   10640 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:55:37.176892   10640 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:55:37.177375   10640 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
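	[editor's note] The inspect command being retried above uses a Go template to pull the published host port for 22/tcp out of the container's network settings. An equivalent, hypothetical query against a container named `demo` (assumes a running Docker daemon; the jq variant is shown only for comparison):

```shell
# Go-template form, as logged above ("demo" is an illustrative container name).
docker container inspect demo \
  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'

# Same lookup expressed against the raw inspect JSON with jq.
docker container inspect demo \
  | jq -r '.[0].NetworkSettings.Ports["22/tcp"][0].HostPort'
```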
	I1117 22:55:37.458656   10640 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:55:37.546670   10640 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:55:37.546934   10640 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:38.092535   10640 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:55:38.180245   10640 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:55:38.180605   10640 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:38.839385   10640 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:55:38.935029   10640 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	W1117 22:55:38.935264   10640 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	W1117 22:55:38.935264   10640 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:38.935264   10640 start.go:129] duration metric: createHost completed in 6.0453459s
	I1117 22:55:38.935264   10640 start.go:80] releasing machines lock for "multinode-20211117225530-9504", held for 6.0453459s
	W1117 22:55:38.935264   10640 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:55:38.943883   10640 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:55:39.028019   10640 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:55:39.028218   10640 delete.go:82] Unable to get host status for multinode-20211117225530-9504, assuming it has already been deleted: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	W1117 22:55:39.028455   10640 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:55:39.028455   10640 start.go:547] Will try again in 5 seconds ...
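	[editor's note] The `will retry after …` entries above come from minikube's generic retry helper, which re-runs the failing inspect with growing delays before giving up. A self-contained sketch of that backoff pattern; `try_cmd` and the flag file are stand-ins for the real `docker container inspect` call, not minikube code:

```shell
# Exponential-backoff retry loop, mirroring the retry.go pattern logged above.
# try_cmd stands in for `docker container inspect ...`; it succeeds once a
# flag file appears, so the example is self-contained and deterministic.
flag="./retry_ok.tmp"
rm -f "$flag"
try_cmd() { [ -f "$flag" ]; }

n=0
delay=1
while ! try_cmd; do
  n=$((n + 1))
  echo "attempt $n failed; would retry after ${delay}s"
  if [ "$n" -ge 4 ]; then
    touch "$flag"   # stand-in: the container finally becomes inspectable
  fi
  # sleep "$delay"  # the real loop sleeps between attempts
  delay=$((delay * 2))
done
echo "succeeded after $n failed attempts"
rm -f "$flag"
```

The real helper also enforces an overall deadline; this sketch caps attempts instead to stay deterministic.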
	I1117 22:55:44.028811   10640 start.go:313] acquiring machines lock for multinode-20211117225530-9504: {Name:mk4ceed01407c773b7965bcd22df69b99303385f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 22:55:44.028811   10640 start.go:317] acquired machines lock for "multinode-20211117225530-9504" in 0s
	I1117 22:55:44.029576   10640 start.go:93] Skipping create...Using existing machine configuration
	I1117 22:55:44.029576   10640 fix.go:55] fixHost starting: 
	I1117 22:55:44.038104   10640 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:55:44.128277   10640 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:55:44.128545   10640 fix.go:108] recreateIfNeeded on multinode-20211117225530-9504: state= err=unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:44.128545   10640 fix.go:113] machineExists: false. err=machine does not exist
	I1117 22:55:44.133002   10640 out.go:176] * docker "multinode-20211117225530-9504" container is missing, will recreate.
	I1117 22:55:44.133002   10640 delete.go:124] DEMOLISHING multinode-20211117225530-9504 ...
	I1117 22:55:44.139919   10640 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:55:44.226189   10640 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:55:44.226292   10640 stop.go:75] unable to get state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:44.226292   10640 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:44.233710   10640 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:55:44.323078   10640 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:55:44.323275   10640 delete.go:82] Unable to get host status for multinode-20211117225530-9504, assuming it has already been deleted: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:44.327508   10640 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117225530-9504
	W1117 22:55:44.414627   10640 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117225530-9504 returned with exit code 1
	I1117 22:55:44.414961   10640 kic.go:360] could not find the container multinode-20211117225530-9504 to remove it. will try anyways
	I1117 22:55:44.419296   10640 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:55:44.507100   10640 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:55:44.507511   10640 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:44.511573   10640 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0"
	W1117 22:55:44.606060   10640 cli_runner.go:162] docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 22:55:44.606060   10640 oci.go:658] error shutdown multinode-20211117225530-9504: docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:45.611584   10640 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:55:45.704502   10640 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:55:45.704502   10640 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:45.704502   10640 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:55:45.704763   10640 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:46.172597   10640 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:55:46.266654   10640 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:55:46.266654   10640 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:46.266654   10640 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:55:46.266654   10640 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:47.162288   10640 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:55:47.247460   10640 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:55:47.247816   10640 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:47.247816   10640 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:55:47.247816   10640 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:47.889455   10640 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:55:47.979938   10640 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:55:47.980095   10640 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:47.980095   10640 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:55:47.980095   10640 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:49.092628   10640 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:55:49.182598   10640 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:55:49.182598   10640 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:49.182984   10640 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:55:49.183057   10640 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:50.700971   10640 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:55:50.793204   10640 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:55:50.793309   10640 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:50.793371   10640 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:55:50.793371   10640 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:53.841647   10640 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:55:53.928586   10640 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:55:53.928656   10640 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:53.928837   10640 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:55:53.928895   10640 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:59.715892   10640 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:55:59.821408   10640 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:55:59.821408   10640 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:55:59.821408   10640 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:55:59.821408   10640 oci.go:87] couldn't shut down multinode-20211117225530-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	 
	I1117 22:55:59.827461   10640 cli_runner.go:115] Run: docker rm -f -v multinode-20211117225530-9504
	W1117 22:55:59.920444   10640 cli_runner.go:162] docker rm -f -v multinode-20211117225530-9504 returned with exit code 1
	W1117 22:55:59.921549   10640 delete.go:139] delete failed (probably ok) <nil>
	I1117 22:55:59.921549   10640 fix.go:120] Sleeping 1 second for extra luck!
	I1117 22:56:00.921803   10640 start.go:126] createHost starting for "" (driver="docker")
	I1117 22:56:00.926174   10640 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 22:56:00.926407   10640 start.go:160] libmachine.API.Create for "multinode-20211117225530-9504" (driver="docker")
	I1117 22:56:00.926499   10640 client.go:168] LocalClient.Create starting
	I1117 22:56:00.927016   10640 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 22:56:00.927269   10640 main.go:130] libmachine: Decoding PEM data...
	I1117 22:56:00.927303   10640 main.go:130] libmachine: Parsing certificate...
	I1117 22:56:00.927403   10640 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 22:56:00.927403   10640 main.go:130] libmachine: Decoding PEM data...
	I1117 22:56:00.927403   10640 main.go:130] libmachine: Parsing certificate...
	I1117 22:56:00.931958   10640 cli_runner.go:115] Run: docker network inspect multinode-20211117225530-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 22:56:01.035331   10640 network_create.go:67] Found existing network {name:multinode-20211117225530-9504 subnet:0xc000cb0450 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 22:56:01.035545   10640 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20211117225530-9504" container
	I1117 22:56:01.042589   10640 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 22:56:01.145670   10640 cli_runner.go:115] Run: docker volume create multinode-20211117225530-9504 --label name.minikube.sigs.k8s.io=multinode-20211117225530-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 22:56:01.237000   10640 oci.go:102] Successfully created a docker volume multinode-20211117225530-9504
	I1117 22:56:01.241169   10640 cli_runner.go:115] Run: docker run --rm --name multinode-20211117225530-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117225530-9504 --entrypoint /usr/bin/test -v multinode-20211117225530-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 22:56:02.068890   10640 oci.go:106] Successfully prepared a docker volume multinode-20211117225530-9504
	I1117 22:56:02.069228   10640 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:56:02.069272   10640 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 22:56:02.074305   10640 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 22:56:02.074305   10640 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 22:56:02.191747   10640 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 22:56:02.191829   10640 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 22:56:02.413736   10640 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:56:02.156462073 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 22:56:02.414206   10640 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 22:56:02.414206   10640 client.go:171] LocalClient.Create took 1.4876957s
	I1117 22:56:04.422739   10640 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:56:04.425525   10640 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:56:04.528639   10640 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:56:04.528917   10640 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:56:04.713989   10640 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:56:04.806541   10640 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:56:04.806702   10640 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:56:05.142603   10640 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:56:05.230889   10640 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:56:05.231107   10640 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:56:05.696594   10640 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:56:05.784782   10640 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	W1117 22:56:05.784939   10640 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	W1117 22:56:05.784939   10640 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:56:05.784939   10640 start.go:129] duration metric: createHost completed in 4.8628936s
	I1117 22:56:05.793034   10640 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:56:05.796144   10640 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:56:05.885754   10640 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:56:05.885873   10640 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:56:06.086509   10640 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:56:06.185896   10640 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:56:06.186283   10640 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:56:06.488947   10640 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:56:06.576227   10640 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:56:06.576413   10640 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:56:07.244749   10640 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:56:07.333188   10640 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	W1117 22:56:07.333259   10640 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	W1117 22:56:07.333259   10640 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:56:07.333259   10640 fix.go:57] fixHost completed within 23.3035083s
	I1117 22:56:07.333259   10640 start.go:80] releasing machines lock for "multinode-20211117225530-9504", held for 23.3037037s
	W1117 22:56:07.333807   10640 out.go:241] * Failed to start docker container. Running "minikube delete -p multinode-20211117225530-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117225530-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:56:07.338551   10640 out.go:176] 
	W1117 22:56:07.338551   10640 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 22:56:07.339121   10640 out.go:241] * 
	* 
	W1117 22:56:07.340189   10640 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 22:56:07.342521   10640 out.go:176] 

                                                
                                                
** /stderr **
multinode_test.go:84: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-20211117225530-9504 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117225530-9504
helpers_test.go:235: (dbg) docker inspect multinode-20211117225530-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-20211117225530-9504",
	        "Id": "499612ee6071dd6efa08c962472a0a3eb9e81c69d859854a1afad7c16b310041",
	        "Created": "2021-11-17T22:55:33.328415206Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504: exit status 7 (1.7563594s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 22:56:09.298452    7580 status.go:247] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117225530-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (39.08s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (14.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:463: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:463: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (1.7326737s)

                                                
                                                
** stderr ** 
	error: cluster "multinode-20211117225530-9504" does not exist

                                                
                                                
** /stderr **
multinode_test.go:465: failed to create busybox deployment to multinode cluster
multinode_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- rollout status deployment/busybox
multinode_test.go:468: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- rollout status deployment/busybox: exit status 1 (1.7166508s)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-20211117225530-9504"

                                                
                                                
** /stderr **
multinode_test.go:470: failed to deploy busybox to multinode cluster
multinode_test.go:474: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:474: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (1.7866629s)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-20211117225530-9504"

                                                
                                                
** /stderr **
multinode_test.go:476: failed to retrieve Pod IPs
multinode_test.go:480: expected 2 Pod IPs but got 1
multinode_test.go:486: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (1.7935513s)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-20211117225530-9504"

                                                
                                                
** /stderr **
multinode_test.go:488: failed get Pod names
multinode_test.go:494: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- exec  -- nslookup kubernetes.io
multinode_test.go:494: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- exec  -- nslookup kubernetes.io: exit status 1 (1.7603291s)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-20211117225530-9504"

                                                
                                                
** /stderr **
multinode_test.go:496: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:504: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- exec  -- nslookup kubernetes.default
multinode_test.go:504: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- exec  -- nslookup kubernetes.default: exit status 1 (1.7348455s)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-20211117225530-9504"

                                                
                                                
** /stderr **
multinode_test.go:506: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:512: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:512: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (1.7385735s)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-20211117225530-9504"

                                                
                                                
** /stderr **
multinode_test.go:514: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117225530-9504
helpers_test.go:235: (dbg) docker inspect multinode-20211117225530-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-20211117225530-9504",
	        "Id": "499612ee6071dd6efa08c962472a0a3eb9e81c69d859854a1afad7c16b310041",
	        "Created": "2021-11-17T22:55:33.328415206Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504: exit status 7 (1.7620763s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:56:23.424333   10672 status.go:247] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117225530-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (14.13s)

TestMultiNode/serial/PingHostFrom2Pods (3.64s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:522: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:522: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-20211117225530-9504 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (1.7998089s)

** stderr ** 
	error: no server found for cluster "multinode-20211117225530-9504"

** /stderr **
multinode_test.go:524: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117225530-9504
helpers_test.go:235: (dbg) docker inspect multinode-20211117225530-9504:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117225530-9504",
	        "Id": "499612ee6071dd6efa08c962472a0a3eb9e81c69d859854a1afad7c16b310041",
	        "Created": "2021-11-17T22:55:33.328415206Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504: exit status 7 (1.7368296s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:56:27.066261    7244 status.go:247] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117225530-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.64s)

TestMultiNode/serial/AddNode (3.67s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20211117225530-9504 -v 3 --alsologtostderr
multinode_test.go:107: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20211117225530-9504 -v 3 --alsologtostderr: exit status 80 (1.7783154s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 22:56:27.275210    9972 out.go:297] Setting OutFile to fd 992 ...
	I1117 22:56:27.358570    9972 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:56:27.358570    9972 out.go:310] Setting ErrFile to fd 868...
	I1117 22:56:27.358570    9972 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:56:27.370892    9972 mustload.go:65] Loading cluster: multinode-20211117225530-9504
	I1117 22:56:27.371583    9972 config.go:176] Loaded profile config "multinode-20211117225530-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:56:27.380160    9972 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:56:28.831478    9972 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:56:28.831536    9972 cli_runner.go:168] Completed: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: (1.4513073s)
	I1117 22:56:28.835264    9972 out.go:176] 
	W1117 22:56:28.835264    9972 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	W1117 22:56:28.835264    9972 out.go:241] * 
	* 
	W1117 22:56:28.842873    9972 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_1.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_1.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 22:56:28.846066    9972 out.go:176] 

** /stderr **
multinode_test.go:109: failed to add node to current cluster. args "out/minikube-windows-amd64.exe node add -p multinode-20211117225530-9504 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117225530-9504
helpers_test.go:235: (dbg) docker inspect multinode-20211117225530-9504:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117225530-9504",
	        "Id": "499612ee6071dd6efa08c962472a0a3eb9e81c69d859854a1afad7c16b310041",
	        "Created": "2021-11-17T22:55:33.328415206Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504: exit status 7 (1.784349s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:56:30.732425   10656 status.go:247] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117225530-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (3.67s)

TestMultiNode/serial/ProfileList (3.7s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:129: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:129: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.8465376s)
multinode_test.go:152: expected profile "multinode-20211117225530-9504" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-20211117225530-9504\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-20211117225530-9504\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.22.3\",\"ClusterName\":\"multinode-20211117225530-9504\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"ExtraOptions\":[{\"Component\":\"kubelet\",\"Key\":\"cni-conf-dir\",\"Value\":\"/etc/cni/net.mk\"}],\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.22.3\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube2:/minikube-host\"}}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117225530-9504
helpers_test.go:235: (dbg) docker inspect multinode-20211117225530-9504:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117225530-9504",
	        "Id": "499612ee6071dd6efa08c962472a0a3eb9e81c69d859854a1afad7c16b310041",
	        "Created": "2021-11-17T22:55:33.328415206Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504: exit status 7 (1.7438734s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:56:34.432225   12104 status.go:247] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117225530-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (3.70s)

TestMultiNode/serial/CopyFile (3.52s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --output json --alsologtostderr
multinode_test.go:170: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --output json --alsologtostderr: exit status 7 (1.7141851s)

-- stdout --
	{"Name":"multinode-20211117225530-9504","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	I1117 22:56:34.632461    3172 out.go:297] Setting OutFile to fd 976 ...
	I1117 22:56:34.694901    3172 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:56:34.694901    3172 out.go:310] Setting ErrFile to fd 920...
	I1117 22:56:34.694901    3172 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:56:34.705269    3172 out.go:304] Setting JSON to true
	I1117 22:56:34.705269    3172 mustload.go:65] Loading cluster: multinode-20211117225530-9504
	I1117 22:56:34.705969    3172 config.go:176] Loaded profile config "multinode-20211117225530-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:56:34.705969    3172 status.go:253] checking status of multinode-20211117225530-9504 ...
	I1117 22:56:34.713242    3172 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:56:36.143549    3172 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:56:36.143549    3172 cli_runner.go:168] Completed: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: (1.429662s)
	I1117 22:56:36.143549    3172 status.go:328] multinode-20211117225530-9504 host status = "" (err=state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	)
	I1117 22:56:36.144083    3172 status.go:255] multinode-20211117225530-9504 status: &{Name:multinode-20211117225530-9504 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1117 22:56:36.144274    3172 status.go:258] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	E1117 22:56:36.144274    3172 status.go:261] The "multinode-20211117225530-9504" host does not exist!

** /stderr **
multinode_test.go:177: failed to decode json from status: args "out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117225530-9504
helpers_test.go:235: (dbg) docker inspect multinode-20211117225530-9504:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117225530-9504",
	        "Id": "499612ee6071dd6efa08c962472a0a3eb9e81c69d859854a1afad7c16b310041",
	        "Created": "2021-11-17T22:55:33.328415206Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504: exit status 7 (1.7087776s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:56:37.955731   11996 status.go:247] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117225530-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (3.52s)

TestMultiNode/serial/StopNode (5.56s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:192: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 node stop m03
multinode_test.go:192: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 node stop m03: exit status 85 (303.9498ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_a721422985a44b3996d93fcfe1a29c6759a29372_0.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:194: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 node stop m03": exit status 85
multinode_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status
multinode_test.go:198: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status: exit status 7 (1.7058826s)

-- stdout --
	multinode-20211117225530-9504
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E1117 22:56:39.962497    4588 status.go:258] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	E1117 22:56:39.962602    4588 status.go:261] The "multinode-20211117225530-9504" host does not exist!

** /stderr **
multinode_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --alsologtostderr
multinode_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --alsologtostderr: exit status 7 (1.7227164s)

-- stdout --
	multinode-20211117225530-9504
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I1117 22:56:40.167514    8184 out.go:297] Setting OutFile to fd 916 ...
	I1117 22:56:40.234591    8184 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:56:40.234591    8184 out.go:310] Setting ErrFile to fd 792...
	I1117 22:56:40.234591    8184 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:56:40.245099    8184 out.go:304] Setting JSON to false
	I1117 22:56:40.245099    8184 mustload.go:65] Loading cluster: multinode-20211117225530-9504
	I1117 22:56:40.245938    8184 config.go:176] Loaded profile config "multinode-20211117225530-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:56:40.245938    8184 status.go:253] checking status of multinode-20211117225530-9504 ...
	I1117 22:56:40.255145    8184 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:56:41.687802    8184 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:56:41.687802    8184 cli_runner.go:168] Completed: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: (1.4323919s)
	I1117 22:56:41.687802    8184 status.go:328] multinode-20211117225530-9504 host status = "" (err=state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	)
	I1117 22:56:41.688010    8184 status.go:255] multinode-20211117225530-9504 status: &{Name:multinode-20211117225530-9504 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1117 22:56:41.688010    8184 status.go:258] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	E1117 22:56:41.688010    8184 status.go:261] The "multinode-20211117225530-9504" host does not exist!

** /stderr **
multinode_test.go:211: incorrect number of running kubelets: args "out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --alsologtostderr": multinode-20211117225530-9504
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:215: incorrect number of stopped hosts: args "out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --alsologtostderr": multinode-20211117225530-9504
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:219: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --alsologtostderr": multinode-20211117225530-9504
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117225530-9504
helpers_test.go:235: (dbg) docker inspect multinode-20211117225530-9504:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117225530-9504",
	        "Id": "499612ee6071dd6efa08c962472a0a3eb9e81c69d859854a1afad7c16b310041",
	        "Created": "2021-11-17T22:55:33.328415206Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504: exit status 7 (1.731509s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:56:43.519508    7852 status.go:247] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117225530-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (5.56s)

TestMultiNode/serial/StartAfterStop (4.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:226: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:236: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 node start m03 --alsologtostderr
multinode_test.go:236: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 node start m03 --alsologtostderr: exit status 85 (320.3547ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 22:56:43.910031    6496 out.go:297] Setting OutFile to fd 916 ...
	I1117 22:56:43.987310    6496 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:56:43.987310    6496 out.go:310] Setting ErrFile to fd 792...
	I1117 22:56:43.987310    6496 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:56:43.999230    6496 mustload.go:65] Loading cluster: multinode-20211117225530-9504
	I1117 22:56:44.000086    6496 config.go:176] Loaded profile config "multinode-20211117225530-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:56:44.007926    6496 out.go:176] 
	W1117 22:56:44.008488    6496 out.go:241] X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	W1117 22:56:44.008488    6496 out.go:241] * 
	* 
	W1117 22:56:44.019282    6496 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_1.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_1.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 22:56:44.021976    6496 out.go:176] 

** /stderr **
multinode_test.go:238: I1117 22:56:43.910031    6496 out.go:297] Setting OutFile to fd 916 ...
I1117 22:56:43.987310    6496 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 22:56:43.987310    6496 out.go:310] Setting ErrFile to fd 792...
I1117 22:56:43.987310    6496 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 22:56:43.999230    6496 mustload.go:65] Loading cluster: multinode-20211117225530-9504
I1117 22:56:44.000086    6496 config.go:176] Loaded profile config "multinode-20211117225530-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1117 22:56:44.007926    6496 out.go:176] 
W1117 22:56:44.008488    6496 out.go:241] X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
W1117 22:56:44.008488    6496 out.go:241] * 
* 
W1117 22:56:44.019282    6496 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                      │
│    * If the above advice does not help, please let us know:                                                          │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                      │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
│    * Please also attach the following file to the GitHub issue:                                                      │
│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_1.log    │
│                                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                      │
│    * If the above advice does not help, please let us know:                                                          │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                      │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
│    * Please also attach the following file to the GitHub issue:                                                      │
│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_6eb326fa97d317035b4344941f9b9e6dd8ab3d92_1.log    │
│                                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1117 22:56:44.021976    6496 out.go:176] 
multinode_test.go:239: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 node start m03 --alsologtostderr": exit status 85
multinode_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status
multinode_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status: exit status 7 (1.7327899s)

-- stdout --
	multinode-20211117225530-9504
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E1117 22:56:45.762242   11476 status.go:258] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	E1117 22:56:45.762321   11476 status.go:261] The "multinode-20211117225530-9504" host does not exist!

** /stderr **
multinode_test.go:245: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117225530-9504
helpers_test.go:235: (dbg) docker inspect multinode-20211117225530-9504:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117225530-9504",
	        "Id": "499612ee6071dd6efa08c962472a0a3eb9e81c69d859854a1afad7c16b310041",
	        "Created": "2021-11-17T22:55:33.328415206Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504: exit status 7 (1.7459222s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:56:47.619688    4980 status.go:247] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117225530-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (4.10s)

TestMultiNode/serial/RestartKeepsNodes (74.82s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:265: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20211117225530-9504
multinode_test.go:272: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20211117225530-9504
multinode_test.go:272: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p multinode-20211117225530-9504: exit status 82 (15.053709s)

-- stdout --
	* Stopping node "multinode-20211117225530-9504"  ...
	* Stopping node "multinode-20211117225530-9504"  ...
	* Stopping node "multinode-20211117225530-9504"  ...
	* Stopping node "multinode-20211117225530-9504"  ...
	* Stopping node "multinode-20211117225530-9504"  ...
	* Stopping node "multinode-20211117225530-9504"  ...
	
	

-- /stdout --
** stderr ** 
	E1117 22:56:51.586136    9384 daemonize_windows.go:39] error terminating scheduled stop for profile multinode-20211117225530-9504: stopping schedule-stop service for profile multinode-20211117225530-9504: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect multinode-20211117225530-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:274: failed to run minikube stop. args "out/minikube-windows-amd64.exe node list -p multinode-20211117225530-9504" : exit status 82
multinode_test.go:277: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20211117225530-9504 --wait=true -v=8 --alsologtostderr
multinode_test.go:277: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20211117225530-9504 --wait=true -v=8 --alsologtostderr: exit status 80 (57.2035023s)

-- stdout --
	* [multinode-20211117225530-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20211117225530-9504 in cluster multinode-20211117225530-9504
	* Pulling base image ...
	* docker "multinode-20211117225530-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20211117225530-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 22:57:03.167384    4260 out.go:297] Setting OutFile to fd 980 ...
	I1117 22:57:03.234870    4260 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:57:03.234870    4260 out.go:310] Setting ErrFile to fd 704...
	I1117 22:57:03.234870    4260 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:57:03.244479    4260 out.go:304] Setting JSON to false
	I1117 22:57:03.246884    4260 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79139,"bootTime":1637110684,"procs":126,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 22:57:03.246884    4260 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 22:57:03.252924    4260 out.go:176] * [multinode-20211117225530-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 22:57:03.252924    4260 notify.go:174] Checking for updates...
	I1117 22:57:03.255601    4260 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 22:57:03.257471    4260 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 22:57:03.259466    4260 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 22:57:03.261501    4260 config.go:176] Loaded profile config "multinode-20211117225530-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:57:03.261779    4260 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 22:57:04.801061    4260 docker.go:132] docker version: linux-19.03.12
	I1117 22:57:04.805615    4260 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:57:05.139879    4260 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:57:04.88215645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:57:05.143258    4260 out.go:176] * Using the docker driver based on existing profile
	I1117 22:57:05.143355    4260 start.go:280] selected driver: docker
	I1117 22:57:05.143355    4260 start.go:775] validating driver "docker" against &{Name:multinode-20211117225530-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117225530-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:57:05.143522    4260 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 22:57:05.155289    4260 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:57:05.530943    4260 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:57:05.245426273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:57:05.641886    4260 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 22:57:05.641886    4260 cni.go:93] Creating CNI manager for ""
	I1117 22:57:05.641886    4260 cni.go:154] 1 nodes found, recommending kindnet
	I1117 22:57:05.641886    4260 start_flags.go:282] config:
	{Name:multinode-20211117225530-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117225530-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:57:05.646279    4260 out.go:176] * Starting control plane node multinode-20211117225530-9504 in cluster multinode-20211117225530-9504
	I1117 22:57:05.646279    4260 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 22:57:05.649140    4260 out.go:176] * Pulling base image ...
	I1117 22:57:05.650144    4260 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:57:05.650339    4260 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 22:57:05.650396    4260 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 22:57:05.650518    4260 cache.go:57] Caching tarball of preloaded images
	I1117 22:57:05.650732    4260 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 22:57:05.650732    4260 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 22:57:05.651275    4260 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20211117225530-9504\config.json ...
	I1117 22:57:05.750203    4260 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 22:57:05.750203    4260 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 22:57:05.750203    4260 cache.go:206] Successfully downloaded all kic artifacts
	I1117 22:57:05.750613    4260 start.go:313] acquiring machines lock for multinode-20211117225530-9504: {Name:mk4ceed01407c773b7965bcd22df69b99303385f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 22:57:05.750956    4260 start.go:317] acquired machines lock for "multinode-20211117225530-9504" in 109.2µs
	I1117 22:57:05.751130    4260 start.go:93] Skipping create...Using existing machine configuration
	I1117 22:57:05.751130    4260 fix.go:55] fixHost starting: 
	I1117 22:57:05.762581    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:05.856425    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:05.856781    4260 fix.go:108] recreateIfNeeded on multinode-20211117225530-9504: state= err=unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:05.856870    4260 fix.go:113] machineExists: false. err=machine does not exist
	I1117 22:57:05.860185    4260 out.go:176] * docker "multinode-20211117225530-9504" container is missing, will recreate.
	I1117 22:57:05.860336    4260 delete.go:124] DEMOLISHING multinode-20211117225530-9504 ...
	I1117 22:57:05.867419    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:05.956197    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:57:05.956197    4260 stop.go:75] unable to get state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:05.956197    4260 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:05.965802    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:06.054204    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:06.054479    4260 delete.go:82] Unable to get host status for multinode-20211117225530-9504, assuming it has already been deleted: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:06.058747    4260 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117225530-9504
	W1117 22:57:06.162168    4260 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117225530-9504 returned with exit code 1
	I1117 22:57:06.162424    4260 kic.go:360] could not find the container multinode-20211117225530-9504 to remove it. will try anyways
	I1117 22:57:06.166914    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:06.266774    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:57:06.266932    4260 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:06.271370    4260 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0"
	W1117 22:57:06.362071    4260 cli_runner.go:162] docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 22:57:06.362153    4260 oci.go:658] error shutdown multinode-20211117225530-9504: docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:07.366811    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:07.452602    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:07.452602    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:07.452602    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:07.452602    4260 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:08.010963    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:08.098150    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:08.098365    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:08.098365    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:08.098365    4260 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:09.184606    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:09.276469    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:09.276469    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:09.276469    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:09.276469    4260 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:10.592178    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:10.682699    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:10.682938    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:10.682938    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:10.683007    4260 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:12.271036    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:12.361151    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:12.361431    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:12.361431    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:12.361431    4260 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:14.707684    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:14.798190    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:14.798190    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:14.798190    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:14.798190    4260 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:19.309327    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:19.401168    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:19.401439    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:19.401439    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:19.401439    4260 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:22.628096    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:22.719813    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:22.719813    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:22.719813    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:22.720134    4260 oci.go:87] couldn't shut down multinode-20211117225530-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	 
	I1117 22:57:22.724654    4260 cli_runner.go:115] Run: docker rm -f -v multinode-20211117225530-9504
	W1117 22:57:22.810557    4260 cli_runner.go:162] docker rm -f -v multinode-20211117225530-9504 returned with exit code 1
	W1117 22:57:22.812377    4260 delete.go:139] delete failed (probably ok) <nil>
	I1117 22:57:22.812377    4260 fix.go:120] Sleeping 1 second for extra luck!
	I1117 22:57:23.812676    4260 start.go:126] createHost starting for "" (driver="docker")
	I1117 22:57:23.815930    4260 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 22:57:23.816355    4260 start.go:160] libmachine.API.Create for "multinode-20211117225530-9504" (driver="docker")
	I1117 22:57:23.816488    4260 client.go:168] LocalClient.Create starting
	I1117 22:57:23.816570    4260 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 22:57:23.817176    4260 main.go:130] libmachine: Decoding PEM data...
	I1117 22:57:23.817375    4260 main.go:130] libmachine: Parsing certificate...
	I1117 22:57:23.817597    4260 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 22:57:23.817788    4260 main.go:130] libmachine: Decoding PEM data...
	I1117 22:57:23.817788    4260 main.go:130] libmachine: Parsing certificate...
	I1117 22:57:23.823493    4260 cli_runner.go:115] Run: docker network inspect multinode-20211117225530-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 22:57:23.916420    4260 network_create.go:67] Found existing network {name:multinode-20211117225530-9504 subnet:0xc0011f62d0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 22:57:23.916420    4260 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20211117225530-9504" container
	I1117 22:57:23.923867    4260 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 22:57:24.019661    4260 cli_runner.go:115] Run: docker volume create multinode-20211117225530-9504 --label name.minikube.sigs.k8s.io=multinode-20211117225530-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 22:57:24.117482    4260 oci.go:102] Successfully created a docker volume multinode-20211117225530-9504
	I1117 22:57:24.121531    4260 cli_runner.go:115] Run: docker run --rm --name multinode-20211117225530-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117225530-9504 --entrypoint /usr/bin/test -v multinode-20211117225530-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 22:57:24.993417    4260 oci.go:106] Successfully prepared a docker volume multinode-20211117225530-9504
	I1117 22:57:24.993613    4260 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:57:24.993708    4260 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 22:57:24.998216    4260 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:57:24.998825    4260 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 22:57:25.122121    4260 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 22:57:25.122207    4260 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 22:57:25.350153    4260 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:57:25.086908141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 22:57:25.350153    4260 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 22:57:25.350153    4260 client.go:171] LocalClient.Create took 1.5336536s
	I1117 22:57:27.358578    4260 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:57:27.363341    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:27.452114    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:57:27.452114    4260 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:27.606605    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:27.698026    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:57:27.698188    4260 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:28.004572    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:28.104327    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:57:28.104414    4260 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:28.680555    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:28.767610    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	W1117 22:57:28.767961    4260 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	W1117 22:57:28.768043    4260 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:28.768043    4260 start.go:129] duration metric: createHost completed in 4.9553291s
	I1117 22:57:28.775060    4260 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:57:28.780688    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:28.871305    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:57:28.871689    4260 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:29.055456    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:29.151352    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:57:29.151352    4260 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:29.486331    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:29.571761    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:57:29.571934    4260 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:30.037512    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:30.126416    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	W1117 22:57:30.126669    4260 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	W1117 22:57:30.126669    4260 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:30.126760    4260 fix.go:57] fixHost completed within 24.3753562s
	I1117 22:57:30.126760    4260 start.go:80] releasing machines lock for "multinode-20211117225530-9504", held for 24.3756206s
	W1117 22:57:30.126914    4260 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 22:57:30.127185    4260 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:57:30.127185    4260 start.go:547] Will try again in 5 seconds ...
	I1117 22:57:35.127745    4260 start.go:313] acquiring machines lock for multinode-20211117225530-9504: {Name:mk4ceed01407c773b7965bcd22df69b99303385f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 22:57:35.127745    4260 start.go:317] acquired machines lock for "multinode-20211117225530-9504" in 0s
	I1117 22:57:35.127745    4260 start.go:93] Skipping create...Using existing machine configuration
	I1117 22:57:35.127745    4260 fix.go:55] fixHost starting: 
	I1117 22:57:35.135563    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:35.227942    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:35.227942    4260 fix.go:108] recreateIfNeeded on multinode-20211117225530-9504: state= err=unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:35.227942    4260 fix.go:113] machineExists: false. err=machine does not exist
	I1117 22:57:35.232238    4260 out.go:176] * docker "multinode-20211117225530-9504" container is missing, will recreate.
	I1117 22:57:35.232238    4260 delete.go:124] DEMOLISHING multinode-20211117225530-9504 ...
	I1117 22:57:35.239485    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:35.346453    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:57:35.346453    4260 stop.go:75] unable to get state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:35.346453    4260 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:35.354868    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:35.441884    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:35.442083    4260 delete.go:82] Unable to get host status for multinode-20211117225530-9504, assuming it has already been deleted: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:35.446304    4260 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117225530-9504
	W1117 22:57:35.531395    4260 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117225530-9504 returned with exit code 1
	I1117 22:57:35.531474    4260 kic.go:360] could not find the container multinode-20211117225530-9504 to remove it. will try anyways
	I1117 22:57:35.535806    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:35.623460    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:57:35.623733    4260 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:35.627919    4260 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0"
	W1117 22:57:35.715501    4260 cli_runner.go:162] docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 22:57:35.715501    4260 oci.go:658] error shutdown multinode-20211117225530-9504: docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:36.723986    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:36.809903    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:36.810183    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:36.810183    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:36.810322    4260 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:37.208462    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:37.295868    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:37.295943    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:37.295943    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:37.295943    4260 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:37.896320    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:37.984536    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:37.984536    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:37.984536    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:37.984536    4260 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:39.315261    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:39.403551    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:39.403551    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:39.403942    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:39.403988    4260 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:40.621091    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:40.711917    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:40.712017    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:40.712017    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:40.712017    4260 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:42.497552    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:42.586939    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:42.587025    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:42.587025    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:42.587100    4260 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:45.860646    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:45.959469    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:45.959624    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:45.959624    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:45.959624    4260 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:52.063301    4260 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:57:52.159557    4260 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:57:52.159557    4260 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:52.159557    4260 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:57:52.160160    4260 oci.go:87] couldn't shut down multinode-20211117225530-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	 
	I1117 22:57:52.164740    4260 cli_runner.go:115] Run: docker rm -f -v multinode-20211117225530-9504
	W1117 22:57:52.254416    4260 cli_runner.go:162] docker rm -f -v multinode-20211117225530-9504 returned with exit code 1
	W1117 22:57:52.255626    4260 delete.go:139] delete failed (probably ok) <nil>
	I1117 22:57:52.255626    4260 fix.go:120] Sleeping 1 second for extra luck!
	I1117 22:57:53.256653    4260 start.go:126] createHost starting for "" (driver="docker")
	I1117 22:57:53.262990    4260 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 22:57:53.263119    4260 start.go:160] libmachine.API.Create for "multinode-20211117225530-9504" (driver="docker")
	I1117 22:57:53.263349    4260 client.go:168] LocalClient.Create starting
	I1117 22:57:53.263857    4260 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 22:57:53.264081    4260 main.go:130] libmachine: Decoding PEM data...
	I1117 22:57:53.264137    4260 main.go:130] libmachine: Parsing certificate...
	I1117 22:57:53.264137    4260 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 22:57:53.264137    4260 main.go:130] libmachine: Decoding PEM data...
	I1117 22:57:53.264137    4260 main.go:130] libmachine: Parsing certificate...
	I1117 22:57:53.268360    4260 cli_runner.go:115] Run: docker network inspect multinode-20211117225530-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 22:57:53.357050    4260 network_create.go:67] Found existing network {name:multinode-20211117225530-9504 subnet:0xc000c1c090 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 22:57:53.357231    4260 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20211117225530-9504" container
	I1117 22:57:53.364146    4260 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 22:57:53.454684    4260 cli_runner.go:115] Run: docker volume create multinode-20211117225530-9504 --label name.minikube.sigs.k8s.io=multinode-20211117225530-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 22:57:53.539331    4260 oci.go:102] Successfully created a docker volume multinode-20211117225530-9504
	I1117 22:57:53.543704    4260 cli_runner.go:115] Run: docker run --rm --name multinode-20211117225530-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117225530-9504 --entrypoint /usr/bin/test -v multinode-20211117225530-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 22:57:54.445105    4260 oci.go:106] Successfully prepared a docker volume multinode-20211117225530-9504
	I1117 22:57:54.445404    4260 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:57:54.445404    4260 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 22:57:54.450024    4260 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:57:54.450024    4260 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 22:57:54.560868    4260 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 22:57:54.561028    4260 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 22:57:54.805429    4260 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 22:57:54.537930521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 22:57:54.805795    4260 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 22:57:54.805920    4260 client.go:171] LocalClient.Create took 1.5424343s
	I1117 22:57:56.815247    4260 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:57:56.819066    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:56.916700    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:57:56.916988    4260 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:57.123032    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:57.213464    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:57:57.213464    4260 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:57.516315    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:57.604046    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:57:57.604303    4260 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:58.314910    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:58.405344    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	W1117 22:57:58.405655    4260 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	W1117 22:57:58.405725    4260 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:58.405739    4260 start.go:129] duration metric: createHost completed in 5.1490471s
	I1117 22:57:58.412789    4260 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:57:58.416570    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:58.504858    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:57:58.505212    4260 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:58.852345    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:58.937405    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:57:58.937549    4260 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:57:59.390099    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:57:59.477462    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:57:59.477752    4260 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:00.066183    4260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:58:00.159067    4260 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	W1117 22:58:00.159433    4260 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	W1117 22:58:00.159433    4260 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:00.159433    4260 fix.go:57] fixHost completed within 25.0314996s
	I1117 22:58:00.159433    4260 start.go:80] releasing machines lock for "multinode-20211117225530-9504", held for 25.0314996s
	W1117 22:58:00.159702    4260 out.go:241] * Failed to start docker container. Running "minikube delete -p multinode-20211117225530-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117225530-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:58:00.166836    4260 out.go:176] 
	W1117 22:58:00.166836    4260 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 22:58:00.166836    4260 out.go:241] * 
	* 
	W1117 22:58:00.167681    4260 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 22:58:00.170673    4260 out.go:176] 

                                                
                                                
** /stderr **
multinode_test.go:279: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-20211117225530-9504" : exit status 80
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20211117225530-9504
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117225530-9504
helpers_test.go:235: (dbg) docker inspect multinode-20211117225530-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-20211117225530-9504",
	        "Id": "499612ee6071dd6efa08c962472a0a3eb9e81c69d859854a1afad7c16b310041",
	        "Created": "2021-11-17T22:55:33.328415206Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504: exit status 7 (1.7458483s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 22:58:02.433738    3712 status.go:247] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117225530-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (74.82s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 node delete m03
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 node delete m03: exit status 80 (1.7941789s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_207105384607abbf0a822abec5db82084f27bc08_0.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:378: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 node delete m03": exit status 80
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --alsologtostderr
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --alsologtostderr: exit status 7 (1.7769868s)

                                                
                                                
-- stdout --
	multinode-20211117225530-9504
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 22:58:04.447493    8100 out.go:297] Setting OutFile to fd 1004 ...
	I1117 22:58:04.507481    8100 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:58:04.507481    8100 out.go:310] Setting ErrFile to fd 884...
	I1117 22:58:04.507481    8100 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:58:04.516474    8100 out.go:304] Setting JSON to false
	I1117 22:58:04.516474    8100 mustload.go:65] Loading cluster: multinode-20211117225530-9504
	I1117 22:58:04.517478    8100 config.go:176] Loaded profile config "multinode-20211117225530-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:58:04.517478    8100 status.go:253] checking status of multinode-20211117225530-9504 ...
	I1117 22:58:04.526474    8100 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:58:06.006302    8100 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:58:06.006302    8100 cli_runner.go:168] Completed: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: (1.4798168s)
	I1117 22:58:06.006582    8100 status.go:328] multinode-20211117225530-9504 host status = "" (err=state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	)
	I1117 22:58:06.006582    8100 status.go:255] multinode-20211117225530-9504 status: &{Name:multinode-20211117225530-9504 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1117 22:58:06.006582    8100 status.go:258] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	E1117 22:58:06.006582    8100 status.go:261] The "multinode-20211117225530-9504" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:384: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117225530-9504
helpers_test.go:235: (dbg) docker inspect multinode-20211117225530-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-20211117225530-9504",
	        "Id": "499612ee6071dd6efa08c962472a0a3eb9e81c69d859854a1afad7c16b310041",
	        "Created": "2021-11-17T22:55:33.328415206Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504: exit status 7 (1.740892s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 22:58:07.852223   11568 status.go:247] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117225530-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (5.42s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (20.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 stop
multinode_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 stop: exit status 82 (15.1498282s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-20211117225530-9504"  ...
	* Stopping node "multinode-20211117225530-9504"  ...
	* Stopping node "multinode-20211117225530-9504"  ...
	* Stopping node "multinode-20211117225530-9504"  ...
	* Stopping node "multinode-20211117225530-9504"  ...
	* Stopping node "multinode-20211117225530-9504"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 22:58:11.609497    8692 daemonize_windows.go:39] error terminating scheduled stop for profile multinode-20211117225530-9504: stopping schedule-stop service for profile multinode-20211117225530-9504: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect multinode-20211117225530-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:298: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 stop": exit status 82
multinode_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status
multinode_test.go:302: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status: exit status 7 (1.7967744s)

                                                
                                                
-- stdout --
	multinode-20211117225530-9504
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 22:58:24.795715   10224 status.go:258] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	E1117 22:58:24.795715   10224 status.go:261] The "multinode-20211117225530-9504" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --alsologtostderr
multinode_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --alsologtostderr: exit status 7 (1.7559915s)

                                                
                                                
-- stdout --
	multinode-20211117225530-9504
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 22:58:24.996065    6052 out.go:297] Setting OutFile to fd 324 ...
	I1117 22:58:25.063144    6052 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:58:25.063144    6052 out.go:310] Setting ErrFile to fd 648...
	I1117 22:58:25.063144    6052 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:58:25.074143    6052 out.go:304] Setting JSON to false
	I1117 22:58:25.074143    6052 mustload.go:65] Loading cluster: multinode-20211117225530-9504
	I1117 22:58:25.074834    6052 config.go:176] Loaded profile config "multinode-20211117225530-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:58:25.074834    6052 status.go:253] checking status of multinode-20211117225530-9504 ...
	I1117 22:58:25.081758    6052 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:58:26.553358    6052 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:58:26.553572    6052 cli_runner.go:168] Completed: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: (1.471589s)
	I1117 22:58:26.553572    6052 status.go:328] multinode-20211117225530-9504 host status = "" (err=state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	)
	I1117 22:58:26.553572    6052 status.go:255] multinode-20211117225530-9504 status: &{Name:multinode-20211117225530-9504 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1117 22:58:26.553572    6052 status.go:258] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	E1117 22:58:26.553572    6052 status.go:261] The "multinode-20211117225530-9504" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:315: incorrect number of stopped hosts: args "out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --alsologtostderr": multinode-20211117225530-9504
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:319: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-20211117225530-9504 status --alsologtostderr": multinode-20211117225530-9504
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117225530-9504
helpers_test.go:235: (dbg) docker inspect multinode-20211117225530-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-20211117225530-9504",
	        "Id": "499612ee6071dd6efa08c962472a0a3eb9e81c69d859854a1afad7c16b310041",
	        "Created": "2021-11-17T22:55:33.328415206Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504: exit status 7 (1.7301027s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 22:58:28.386690    6776 status.go:247] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117225530-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (20.53s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (59.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:326: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:336: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20211117225530-9504 --wait=true -v=8 --alsologtostderr --driver=docker
multinode_test.go:336: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20211117225530-9504 --wait=true -v=8 --alsologtostderr --driver=docker: exit status 80 (57.1800887s)

                                                
                                                
-- stdout --
	* [multinode-20211117225530-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20211117225530-9504 in cluster multinode-20211117225530-9504
	* Pulling base image ...
	* docker "multinode-20211117225530-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20211117225530-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 22:58:28.767999    6300 out.go:297] Setting OutFile to fd 976 ...
	I1117 22:58:28.832186    6300 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:58:28.832186    6300 out.go:310] Setting ErrFile to fd 668...
	I1117 22:58:28.832186    6300 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:58:28.842220    6300 out.go:304] Setting JSON to false
	I1117 22:58:28.844316    6300 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79224,"bootTime":1637110684,"procs":126,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 22:58:28.845301    6300 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 22:58:28.851061    6300 out.go:176] * [multinode-20211117225530-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 22:58:28.851344    6300 notify.go:174] Checking for updates...
	I1117 22:58:28.857969    6300 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 22:58:28.859625    6300 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 22:58:28.862336    6300 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 22:58:28.863613    6300 config.go:176] Loaded profile config "multinode-20211117225530-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:58:28.865207    6300 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 22:58:30.425156    6300 docker.go:132] docker version: linux-19.03.12
	I1117 22:58:30.428028    6300 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:58:30.757564    6300 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:58:30.507717544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:58:30.765354    6300 out.go:176] * Using the docker driver based on existing profile
	I1117 22:58:30.765449    6300 start.go:280] selected driver: docker
	I1117 22:58:30.765588    6300 start.go:775] validating driver "docker" against &{Name:multinode-20211117225530-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117225530-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:58:30.765739    6300 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 22:58:30.780159    6300 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:58:31.121754    6300 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:58:30.857548468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:58:31.181789    6300 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 22:58:31.181862    6300 cni.go:93] Creating CNI manager for ""
	I1117 22:58:31.181936    6300 cni.go:154] 1 nodes found, recommending kindnet
	I1117 22:58:31.181936    6300 start_flags.go:282] config:
	{Name:multinode-20211117225530-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117225530-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:58:31.185300    6300 out.go:176] * Starting control plane node multinode-20211117225530-9504 in cluster multinode-20211117225530-9504
	I1117 22:58:31.185300    6300 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 22:58:31.189854    6300 out.go:176] * Pulling base image ...
	I1117 22:58:31.189854    6300 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:58:31.189854    6300 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 22:58:31.189854    6300 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 22:58:31.189854    6300 cache.go:57] Caching tarball of preloaded images
	I1117 22:58:31.190543    6300 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 22:58:31.190732    6300 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 22:58:31.190732    6300 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\multinode-20211117225530-9504\config.json ...
	I1117 22:58:31.285112    6300 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 22:58:31.285112    6300 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 22:58:31.285112    6300 cache.go:206] Successfully downloaded all kic artifacts
	I1117 22:58:31.285112    6300 start.go:313] acquiring machines lock for multinode-20211117225530-9504: {Name:mk4ceed01407c773b7965bcd22df69b99303385f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 22:58:31.285694    6300 start.go:317] acquired machines lock for "multinode-20211117225530-9504" in 581.9µs
	I1117 22:58:31.285892    6300 start.go:93] Skipping create...Using existing machine configuration
	I1117 22:58:31.285928    6300 fix.go:55] fixHost starting: 
	I1117 22:58:31.292116    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:58:31.390044    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:58:31.390044    6300 fix.go:108] recreateIfNeeded on multinode-20211117225530-9504: state= err=unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:31.390044    6300 fix.go:113] machineExists: false. err=machine does not exist
	I1117 22:58:31.398601    6300 out.go:176] * docker "multinode-20211117225530-9504" container is missing, will recreate.
	I1117 22:58:31.398601    6300 delete.go:124] DEMOLISHING multinode-20211117225530-9504 ...
	I1117 22:58:31.406175    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:58:31.500127    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:58:31.500127    6300 stop.go:75] unable to get state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:31.500127    6300 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:31.508412    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:58:31.608241    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:58:31.608241    6300 delete.go:82] Unable to get host status for multinode-20211117225530-9504, assuming it has already been deleted: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:31.611316    6300 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117225530-9504
	W1117 22:58:31.711418    6300 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117225530-9504 returned with exit code 1
	I1117 22:58:31.711418    6300 kic.go:360] could not find the container multinode-20211117225530-9504 to remove it. will try anyways
	I1117 22:58:31.715493    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:58:31.804685    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:58:31.804685    6300 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:31.808034    6300 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0"
	W1117 22:58:31.891413    6300 cli_runner.go:162] docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 22:58:31.891413    6300 oci.go:658] error shutdown multinode-20211117225530-9504: docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:32.894940    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:58:32.984648    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:58:32.984648    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:32.984648    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:58:32.984648    6300 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:33.540939    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:58:33.631692    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:58:33.631692    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:33.631692    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:58:33.631692    6300 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:34.716114    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:58:34.808421    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:58:34.808421    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:34.808699    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:58:34.808699    6300 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:36.123075    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:58:36.222865    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:58:36.222865    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:36.222865    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:58:36.222865    6300 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:37.809299    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:58:37.898975    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:58:37.898975    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:37.898975    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:58:37.898975    6300 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:40.242781    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:58:40.338865    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:58:40.338865    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:40.338865    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:58:40.338865    6300 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:44.848162    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:58:44.942670    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:58:44.942670    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:44.942670    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:58:44.942670    6300 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:48.167279    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:58:48.274233    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:58:48.274233    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:48.274233    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:58:48.274555    6300 oci.go:87] couldn't shut down multinode-20211117225530-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	 
	I1117 22:58:48.277762    6300 cli_runner.go:115] Run: docker rm -f -v multinode-20211117225530-9504
	W1117 22:58:48.366583    6300 cli_runner.go:162] docker rm -f -v multinode-20211117225530-9504 returned with exit code 1
	W1117 22:58:48.367685    6300 delete.go:139] delete failed (probably ok) <nil>
	I1117 22:58:48.367685    6300 fix.go:120] Sleeping 1 second for extra luck!
	I1117 22:58:49.368015    6300 start.go:126] createHost starting for "" (driver="docker")
	I1117 22:58:49.371636    6300 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 22:58:49.372727    6300 start.go:160] libmachine.API.Create for "multinode-20211117225530-9504" (driver="docker")
	I1117 22:58:49.372727    6300 client.go:168] LocalClient.Create starting
	I1117 22:58:49.372984    6300 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 22:58:49.372984    6300 main.go:130] libmachine: Decoding PEM data...
	I1117 22:58:49.372984    6300 main.go:130] libmachine: Parsing certificate...
	I1117 22:58:49.372984    6300 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 22:58:49.372984    6300 main.go:130] libmachine: Decoding PEM data...
	I1117 22:58:49.372984    6300 main.go:130] libmachine: Parsing certificate...
	I1117 22:58:49.378724    6300 cli_runner.go:115] Run: docker network inspect multinode-20211117225530-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 22:58:49.476298    6300 network_create.go:67] Found existing network {name:multinode-20211117225530-9504 subnet:0xc00113db60 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 22:58:49.476298    6300 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20211117225530-9504" container
	I1117 22:58:49.483296    6300 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 22:58:49.589706    6300 cli_runner.go:115] Run: docker volume create multinode-20211117225530-9504 --label name.minikube.sigs.k8s.io=multinode-20211117225530-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 22:58:49.676973    6300 oci.go:102] Successfully created a docker volume multinode-20211117225530-9504
	I1117 22:58:49.679978    6300 cli_runner.go:115] Run: docker run --rm --name multinode-20211117225530-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117225530-9504 --entrypoint /usr/bin/test -v multinode-20211117225530-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 22:58:50.551966    6300 oci.go:106] Successfully prepared a docker volume multinode-20211117225530-9504
	I1117 22:58:50.551966    6300 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:58:50.551966    6300 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 22:58:50.556538    6300 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 22:58:50.556538    6300 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 22:58:50.667067    6300 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 22:58:50.667067    6300 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 22:58:50.904495    6300 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 22:58:50.650123267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 22:58:50.904495    6300 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 22:58:50.904495    6300 client.go:171] LocalClient.Create took 1.5317565s
	I1117 22:58:52.913023    6300 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:58:52.922954    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:58:53.034501    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:58:53.034786    6300 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:53.189164    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:58:53.283066    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:58:53.283066    6300 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:53.586853    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:58:53.676868    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:58:53.677175    6300 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:54.252135    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:58:54.357853    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	W1117 22:58:54.357853    6300 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	W1117 22:58:54.357853    6300 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:54.357853    6300 start.go:129] duration metric: createHost completed in 4.9898008s
	I1117 22:58:54.366065    6300 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:58:54.370041    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:58:54.457242    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:58:54.457242    6300 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:54.640717    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:58:54.737642    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:58:54.737642    6300 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:55.071370    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:58:55.162273    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:58:55.162273    6300 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:55.626093    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:58:55.716351    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	W1117 22:58:55.716606    6300 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	W1117 22:58:55.716606    6300 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:58:55.716606    6300 fix.go:57] fixHost completed within 24.4304955s
	I1117 22:58:55.716606    6300 start.go:80] releasing machines lock for "multinode-20211117225530-9504", held for 24.4307289s
	W1117 22:58:55.716606    6300 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 22:58:55.716606    6300 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:58:55.716606    6300 start.go:547] Will try again in 5 seconds ...
	I1117 22:59:00.716673    6300 start.go:313] acquiring machines lock for multinode-20211117225530-9504: {Name:mk4ceed01407c773b7965bcd22df69b99303385f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 22:59:00.716673    6300 start.go:317] acquired machines lock for "multinode-20211117225530-9504" in 0s
	I1117 22:59:00.717398    6300 start.go:93] Skipping create...Using existing machine configuration
	I1117 22:59:00.717398    6300 fix.go:55] fixHost starting: 
	I1117 22:59:00.724186    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:59:00.819099    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:59:00.819099    6300 fix.go:108] recreateIfNeeded on multinode-20211117225530-9504: state= err=unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:00.819099    6300 fix.go:113] machineExists: false. err=machine does not exist
	I1117 22:59:00.823569    6300 out.go:176] * docker "multinode-20211117225530-9504" container is missing, will recreate.
	I1117 22:59:00.823569    6300 delete.go:124] DEMOLISHING multinode-20211117225530-9504 ...
	I1117 22:59:00.833310    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:59:00.923211    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:59:00.923211    6300 stop.go:75] unable to get state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:00.923211    6300 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:00.931611    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:59:01.021055    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:59:01.021055    6300 delete.go:82] Unable to get host status for multinode-20211117225530-9504, assuming it has already been deleted: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:01.023545    6300 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117225530-9504
	W1117 22:59:01.110632    6300 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117225530-9504 returned with exit code 1
	I1117 22:59:01.110632    6300 kic.go:360] could not find the container multinode-20211117225530-9504 to remove it. will try anyways
	I1117 22:59:01.113977    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:59:01.199077    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	W1117 22:59:01.199077    6300 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:01.202582    6300 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0"
	W1117 22:59:01.290737    6300 cli_runner.go:162] docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 22:59:01.290737    6300 oci.go:658] error shutdown multinode-20211117225530-9504: docker exec --privileged -t multinode-20211117225530-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:02.293902    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:59:02.386095    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:59:02.386095    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:02.386095    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:59:02.386095    6300 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:02.783602    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:59:02.877274    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:59:02.877274    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:02.877274    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:59:02.877274    6300 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:03.475981    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:59:03.568889    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:59:03.568889    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:03.568889    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:59:03.568889    6300 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:04.898845    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:59:04.992376    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:59:04.992376    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:04.992376    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:59:04.992376    6300 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:06.208878    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:59:06.303295    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:59:06.303295    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:06.303295    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:59:06.303295    6300 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:08.086717    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:59:08.180739    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:59:08.180739    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:08.180739    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:59:08.180739    6300 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:11.453368    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:59:11.545741    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:59:11.545741    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:11.545741    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:59:11.545741    6300 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:17.647265    6300 cli_runner.go:115] Run: docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}
	W1117 22:59:17.740453    6300 cli_runner.go:162] docker container inspect multinode-20211117225530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 22:59:17.740453    6300 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:17.740453    6300 oci.go:672] temporary error: container multinode-20211117225530-9504 status is  but expect it to be exited
	I1117 22:59:17.740589    6300 oci.go:87] couldn't shut down multinode-20211117225530-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	 
	I1117 22:59:17.743375    6300 cli_runner.go:115] Run: docker rm -f -v multinode-20211117225530-9504
	W1117 22:59:17.832092    6300 cli_runner.go:162] docker rm -f -v multinode-20211117225530-9504 returned with exit code 1
	W1117 22:59:17.833019    6300 delete.go:139] delete failed (probably ok) <nil>
	I1117 22:59:17.833019    6300 fix.go:120] Sleeping 1 second for extra luck!
	I1117 22:59:18.833601    6300 start.go:126] createHost starting for "" (driver="docker")
	I1117 22:59:18.837441    6300 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 22:59:18.838219    6300 start.go:160] libmachine.API.Create for "multinode-20211117225530-9504" (driver="docker")
	I1117 22:59:18.838219    6300 client.go:168] LocalClient.Create starting
	I1117 22:59:18.838808    6300 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 22:59:18.839047    6300 main.go:130] libmachine: Decoding PEM data...
	I1117 22:59:18.839047    6300 main.go:130] libmachine: Parsing certificate...
	I1117 22:59:18.839047    6300 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 22:59:18.839047    6300 main.go:130] libmachine: Decoding PEM data...
	I1117 22:59:18.839047    6300 main.go:130] libmachine: Parsing certificate...
	I1117 22:59:18.843279    6300 cli_runner.go:115] Run: docker network inspect multinode-20211117225530-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 22:59:18.931795    6300 network_create.go:67] Found existing network {name:multinode-20211117225530-9504 subnet:0xc001177e00 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 22:59:18.931795    6300 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20211117225530-9504" container
	I1117 22:59:18.937812    6300 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 22:59:19.030453    6300 cli_runner.go:115] Run: docker volume create multinode-20211117225530-9504 --label name.minikube.sigs.k8s.io=multinode-20211117225530-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 22:59:19.121472    6300 oci.go:102] Successfully created a docker volume multinode-20211117225530-9504
	I1117 22:59:19.126001    6300 cli_runner.go:115] Run: docker run --rm --name multinode-20211117225530-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117225530-9504 --entrypoint /usr/bin/test -v multinode-20211117225530-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 22:59:20.003820    6300 oci.go:106] Successfully prepared a docker volume multinode-20211117225530-9504
	I1117 22:59:20.003820    6300 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:59:20.003820    6300 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 22:59:20.007879    6300 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 22:59:20.007879    6300 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 22:59:20.135555    6300 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 22:59:20.135555    6300 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117225530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 22:59:20.365365    6300 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:59:20.101478209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 22:59:20.365365    6300 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 22:59:20.365365    6300 client.go:171] LocalClient.Create took 1.5271353s
	I1117 22:59:22.375445    6300 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:59:22.380242    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:59:22.471831    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:59:22.471831    6300 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:22.673483    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:59:22.766825    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:59:22.767528    6300 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:23.070953    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:59:23.158515    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:59:23.158515    6300 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:23.867877    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:59:23.958708    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	W1117 22:59:23.958708    6300 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	W1117 22:59:23.958708    6300 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:23.958708    6300 start.go:129] duration metric: createHost completed in 5.1250691s
	I1117 22:59:23.973868    6300 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 22:59:23.976887    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:59:24.066624    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:59:24.066949    6300 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:24.412375    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:59:24.503053    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:59:24.503053    6300 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:24.957005    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:59:25.047836    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	I1117 22:59:25.048172    6300 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:25.628734    6300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504
	W1117 22:59:25.726950    6300 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504 returned with exit code 1
	W1117 22:59:25.727011    6300 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	W1117 22:59:25.727011    6300 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117225530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117225530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	I1117 22:59:25.727011    6300 fix.go:57] fixHost completed within 25.0094248s
	I1117 22:59:25.727011    6300 start.go:80] releasing machines lock for "multinode-20211117225530-9504", held for 25.0095556s
	W1117 22:59:25.727755    6300 out.go:241] * Failed to start docker container. Running "minikube delete -p multinode-20211117225530-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117225530-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 22:59:25.732867    6300 out.go:176] 
	W1117 22:59:25.733054    6300 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 22:59:25.733136    6300 out.go:241] * 
	* 
	W1117 22:59:25.733904    6300 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 22:59:25.736293    6300 out.go:176] 

** /stderr **
multinode_test.go:338: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-20211117225530-9504 --wait=true -v=8 --alsologtostderr --driver=docker" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117225530-9504
helpers_test.go:235: (dbg) docker inspect multinode-20211117225530-9504:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117225530-9504",
	        "Id": "499612ee6071dd6efa08c962472a0a3eb9e81c69d859854a1afad7c16b310041",
	        "Created": "2021-11-17T22:55:33.328415206Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504: exit status 7 (1.7666154s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 22:59:27.738985    1368 status.go:247] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117225530-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (59.35s)
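[Annotator's note: the `ssh_runner` lines above repeatedly run `sh -c "df -h /var | awk 'NR==2{print $5}'"` inside the node to read the Use% of /var. A minimal sketch of what that pipeline extracts, run against hypothetical sample `df` output rather than a live node:]

```shell
# Hypothetical df -h output standing in for the node's real filesystem view.
df_output='Filesystem      Size  Used Avail Use% Mounted on
overlay          59G   12G   44G  22% /var'

# NR==2 selects the data row; $5 is the Use% column that start.go parses.
usage=$(printf '%s\n' "$df_output" | awk 'NR==2{print $5}')
echo "$usage"   # prints 22%
```

In this run the pipeline never gets that far: the SSH port lookup fails because the container was never created, so start.go logs "error getting percentage of /var that is free" instead.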

TestMultiNode/serial/ValidateNameConflict (82.35s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:425: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20211117225530-9504
multinode_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20211117225530-9504-m01 --driver=docker
multinode_test.go:434: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20211117225530-9504-m01 --driver=docker: exit status 80 (37.3206406s)

-- stdout --
	* [multinode-20211117225530-9504-m01] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node multinode-20211117225530-9504-m01 in cluster multinode-20211117225530-9504-m01
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=5902MB) ...
	* docker "multinode-20211117225530-9504-m01" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5902MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 22:59:33.054831    1840 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 23:00:00.427214    1840 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117225530-9504-m01" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:442: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20211117225530-9504-m02 --driver=docker
multinode_test.go:442: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20211117225530-9504-m02 --driver=docker: exit status 80 (37.6133559s)

-- stdout --
	* [multinode-20211117225530-9504-m02] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node multinode-20211117225530-9504-m02 in cluster multinode-20211117225530-9504-m02
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=5902MB) ...
	* docker "multinode-20211117225530-9504-m02" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5902MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 23:00:10.525603    9208 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 23:00:38.022849    9208 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117225530-9504-m02" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:444: failed to start profile. args "out/minikube-windows-amd64.exe start -p multinode-20211117225530-9504-m02 --driver=docker" : exit status 80
multinode_test.go:449: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20211117225530-9504
multinode_test.go:449: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20211117225530-9504: exit status 80 (1.7451638s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_1.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:454: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20211117225530-9504-m02
multinode_test.go:454: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20211117225530-9504-m02: (3.5065939s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ValidateNameConflict]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117225530-9504
helpers_test.go:235: (dbg) docker inspect multinode-20211117225530-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T22:55:33Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-20211117225530-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/multinode-20211117225530-9504/_data",
	        "Name": "multinode-20211117225530-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20211117225530-9504 -n multinode-20211117225530-9504: exit status 7 (1.7418789s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:00:50.081143    6004 status.go:247] status error: host: state: unknown state "multinode-20211117225530-9504": docker container inspect multinode-20211117225530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117225530-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117225530-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (82.35s)
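[Annotator's note: the `retry.go:31` lines in the captures above show the port lookup being retried with growing delays ("will retry after 198ms ... 298ms ... 704ms"). A hedged sketch of that retry-with-backoff shape, using a hypothetical `probe` that fails twice before the port mapping would appear; the growth factor here is illustrative, not minikube's exact policy:]

```shell
# probe stands in for the docker-container-inspect port lookup; it fails
# on the first two attempts, mimicking the "No such container" errors above.
attempt=0
probe() { attempt=$((attempt + 1)); [ "$attempt" -ge 3 ]; }

delay_ms=200
while ! probe; do
  sleep "$(awk -v ms="$delay_ms" 'BEGIN{printf "%.3f", ms/1000}')"
  delay_ms=$((delay_ms * 3 / 2))   # grow the delay between attempts
done
echo "resolved after $attempt attempts"
```

In the failing run the container never exists, so every retry hits `Error: No such container` until the caller gives up and reports GUEST_PROVISION.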

TestPreload (42.05s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20211117230054-9504 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
preload_test.go:49: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p test-preload-20211117230054-9504 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: exit status 80 (37.3484973s)

-- stdout --
	* [test-preload-20211117230054-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node test-preload-20211117230054-9504 in cluster test-preload-20211117230054-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "test-preload-20211117230054-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:00:54.625514    9108 out.go:297] Setting OutFile to fd 908 ...
	I1117 23:00:54.690082    9108 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:00:54.690082    9108 out.go:310] Setting ErrFile to fd 1008...
	I1117 23:00:54.690082    9108 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:00:54.701442    9108 out.go:304] Setting JSON to false
	I1117 23:00:54.702853    9108 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79370,"bootTime":1637110684,"procs":127,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:00:54.703852    9108 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:00:54.710546    9108 out.go:176] * [test-preload-20211117230054-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:00:54.710867    9108 notify.go:174] Checking for updates...
	I1117 23:00:54.713398    9108 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:00:54.716780    9108 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:00:54.718998    9108 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:00:54.720550    9108 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:00:54.720550    9108 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:00:56.296010    9108 docker.go:132] docker version: linux-19.03.12
	I1117 23:00:56.303315    9108 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:00:56.651015    9108 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:00:56.38863516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:00:56.657475    9108 out.go:176] * Using the docker driver based on user configuration
	I1117 23:00:56.657475    9108 start.go:280] selected driver: docker
	I1117 23:00:56.657475    9108 start.go:775] validating driver "docker" against <nil>
	I1117 23:00:56.657475    9108 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:00:56.731996    9108 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:00:57.081844    9108 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:00:56.812442644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:00:57.081844    9108 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:00:57.082572    9108 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:00:57.082572    9108 cni.go:93] Creating CNI manager for ""
	I1117 23:00:57.082572    9108 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:00:57.082572    9108 start_flags.go:282] config:
	{Name:test-preload-20211117230054-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20211117230054-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:00:57.087673    9108 out.go:176] * Starting control plane node test-preload-20211117230054-9504 in cluster test-preload-20211117230054-9504
	I1117 23:00:57.087856    9108 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:00:57.091167    9108 out.go:176] * Pulling base image ...
	I1117 23:00:57.091335    9108 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I1117 23:00:57.091434    9108 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:00:57.091524    9108 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\test-preload-20211117230054-9504\config.json ...
	I1117 23:00:57.091778    9108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\test-preload-20211117230054-9504\config.json: {Name:mkdcc309921188994a1094dd196b2bda70ab6dc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:00:57.091941    9108 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.7 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.7
	I1117 23:00:57.092115    9108 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause:3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.1
	I1117 23:00:57.092115    9108 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.17.0
	I1117 23:00:57.092201    9108 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.17.0
	I1117 23:00:57.092201    9108 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns:1.6.5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns_1.6.5
	I1117 23:00:57.092201    9108 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.17.0
	I1117 23:00:57.092201    9108 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.17.0
	I1117 23:00:57.092201    9108 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0
	I1117 23:00:57.092201    9108 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5
	I1117 23:00:57.092201    9108 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.3.1
	I1117 23:00:57.220507    9108 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:00:57.220507    9108 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:00:57.220846    9108 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:00:57.221044    9108 start.go:313] acquiring machines lock for test-preload-20211117230054-9504: {Name:mkf9c361be77da87d4623f7e88facb40d32a0c0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:00:57.221137    9108 start.go:317] acquired machines lock for "test-preload-20211117230054-9504" in 93.8µs
	I1117 23:00:57.221137    9108 start.go:89] Provisioning new machine with config: &{Name:test-preload-20211117230054-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20211117230054-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}
	I1117 23:00:57.221137    9108 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:00:57.226012    9108 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:00:57.226012    9108 start.go:160] libmachine.API.Create for "test-preload-20211117230054-9504" (driver="docker")
	I1117 23:00:57.226012    9108 client.go:168] LocalClient.Create starting
	I1117 23:00:57.226732    9108 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:00:57.226732    9108 main.go:130] libmachine: Decoding PEM data...
	I1117 23:00:57.226732    9108 main.go:130] libmachine: Parsing certificate...
	I1117 23:00:57.227471    9108 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:00:57.227471    9108 main.go:130] libmachine: Decoding PEM data...
	I1117 23:00:57.227471    9108 main.go:130] libmachine: Parsing certificate...
	I1117 23:00:57.233034    9108 cli_runner.go:115] Run: docker network inspect test-preload-20211117230054-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:00:57.273895    9108 cache.go:107] acquiring lock: {Name:mk16b2c84e0562e7dfabdafa8a4b202b59aeeb0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:00:57.273895    9108 cache.go:107] acquiring lock: {Name:mk1db3370d0dcc9154c21db791159e568fddaf45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:00:57.274439    9108 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.7 exists
	I1117 23:00:57.274554    9108 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I1117 23:00:57.274734    9108 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\docker.io\\kubernetesui\\metrics-scraper_v1.0.7" took 182.7311ms
	I1117 23:00:57.274798    9108 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.7 succeeded
	I1117 23:00:57.276111    9108 cache.go:107] acquiring lock: {Name:mk18106ace6176c031d584f589c44302f9454f6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:00:57.276111    9108 cache.go:107] acquiring lock: {Name:mke9439de88fd7cfde7b3c89f335155fffdfe7dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:00:57.276111    9108 cache.go:107] acquiring lock: {Name:mkdf20c2562e230579899c93e52622f9d60f43d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:00:57.276877    9108 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1117 23:00:57.277044    9108 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 184.8422ms
	I1117 23:00:57.277044    9108 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I1117 23:00:57.277044    9108 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I1117 23:00:57.277044    9108 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1117 23:00:57.278034    9108 cache.go:107] acquiring lock: {Name:mk6e2bb00bff7055ecc2377dbbdbddbe323ae00a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:00:57.278537    9108 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I1117 23:00:57.279633    9108 cache.go:107] acquiring lock: {Name:mka79873c4497f8822659d56d0ea202f596b4cfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:00:57.279841    9108 image.go:176] found k8s.gcr.io/pause:3.1 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:pause} tag:3.1 original:k8s.gcr.io/pause:3.1} opener:0xc000828230 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:00:57.279901    9108 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.1
	I1117 23:00:57.280074    9108 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I1117 23:00:57.280074    9108 cache.go:107] acquiring lock: {Name:mk07753e378828d6a9b5c8273895167d2e474020 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:00:57.280807    9108 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.3.1 exists
	I1117 23:00:57.280807    9108 image.go:176] found k8s.gcr.io/kube-proxy:v1.17.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-proxy} tag:v1.17.0 original:k8s.gcr.io/kube-proxy:v1.17.0} opener:0xc0008282a0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:00:57.280807    9108 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.17.0
	I1117 23:00:57.280945    9108 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\docker.io\\kubernetesui\\dashboard_v2.3.1" took 188.7431ms
	I1117 23:00:57.280945    9108 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.3.1 succeeded
	I1117 23:00:57.281279    9108 cache.go:107] acquiring lock: {Name:mk8348b42f7522551052ee0c31ab9fc66958346d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:00:57.281806    9108 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I1117 23:00:57.282886    9108 image.go:176] found k8s.gcr.io/kube-apiserver:v1.17.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-apiserver} tag:v1.17.0 original:k8s.gcr.io/kube-apiserver:v1.17.0} opener:0xc0005f20e0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:00:57.282886    9108 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.17.0
	I1117 23:00:57.284606    9108 cache.go:107] acquiring lock: {Name:mkf54b4a7dfac5e4caa2262e72666000cf8fcf03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:00:57.284606    9108 image.go:176] found k8s.gcr.io/etcd:3.4.3-0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:etcd} tag:3.4.3-0 original:k8s.gcr.io/etcd:3.4.3-0} opener:0xc0005f21c0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:00:57.284606    9108 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0
	I1117 23:00:57.284606    9108 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	W1117 23:00:57.287612    9108 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.17.0.436295216.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.17.0.436295216.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:00:57.287612    9108 image.go:176] found k8s.gcr.io/kube-scheduler:v1.17.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-scheduler} tag:v1.17.0 original:k8s.gcr.io/kube-scheduler:v1.17.0} opener:0xc0005f22a0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:00:57.287612    9108 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.17.0
	I1117 23:00:57.287612    9108 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-apiserver_v1.17.0" took 195.41ms
	I1117 23:00:57.290604    9108 image.go:176] found k8s.gcr.io/kube-controller-manager:v1.17.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-controller-manager} tag:v1.17.0 original:k8s.gcr.io/kube-controller-manager:v1.17.0} opener:0xc0008283f0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:00:57.290604    9108 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.17.0
	W1117 23:00:57.293640    9108 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.1.2525330666.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.1.2525330666.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:00:57.294616    9108 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\pause_3.1" took 202.4994ms
	I1117 23:00:57.295615    9108 image.go:176] found k8s.gcr.io/coredns:1.6.5 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:coredns} tag:1.6.5 original:k8s.gcr.io/coredns:1.6.5} opener:0xc0008284d0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:00:57.295615    9108 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns_1.6.5
	W1117 23:00:57.298611    9108 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0.686993428.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0.686993428.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:00:57.298611    9108 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\etcd_3.4.3-0" took 206.409ms
	W1117 23:00:57.300621    9108 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns_1.6.5.2878729617.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns_1.6.5.2878729617.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:00:57.300621    9108 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\coredns_1.6.5" took 208.4188ms
	W1117 23:00:57.300621    9108 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.17.0.1892051102.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.17.0.1892051102.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:00:57.301616    9108 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-scheduler_v1.17.0" took 209.414ms
	W1117 23:00:57.305603    9108 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.17.0.2581730591.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.17.0.2581730591.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:00:57.306612    9108 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-controller-manager_v1.17.0" took 214.4094ms
	W1117 23:00:57.307599    9108 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.17.0.1934184239.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.17.0.1934184239.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:00:57.307599    9108 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-proxy_v1.17.0" took 215.3971ms
	W1117 23:00:57.339465    9108 cli_runner.go:162] docker network inspect test-preload-20211117230054-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:00:57.342928    9108 network_create.go:254] running [docker network inspect test-preload-20211117230054-9504] to gather additional debugging logs...
	I1117 23:00:57.342928    9108 cli_runner.go:115] Run: docker network inspect test-preload-20211117230054-9504
	W1117 23:00:57.437732    9108 cli_runner.go:162] docker network inspect test-preload-20211117230054-9504 returned with exit code 1
	I1117 23:00:57.438072    9108 network_create.go:257] error running [docker network inspect test-preload-20211117230054-9504]: docker network inspect test-preload-20211117230054-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20211117230054-9504
	I1117 23:00:57.438072    9108 network_create.go:259] output of [docker network inspect test-preload-20211117230054-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20211117230054-9504
	
	** /stderr **
	I1117 23:00:57.441222    9108 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:00:57.548093    9108 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0002c4120] misses:0}
	I1117 23:00:57.548093    9108 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:00:57.548093    9108 network_create.go:106] attempt to create docker network test-preload-20211117230054-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:00:57.551541    9108 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20211117230054-9504
	I1117 23:00:57.750600    9108 network_create.go:90] docker network test-preload-20211117230054-9504 192.168.49.0/24 created
	I1117 23:00:57.750600    9108 kic.go:106] calculated static IP "192.168.49.2" for the "test-preload-20211117230054-9504" container
	I1117 23:00:57.758807    9108 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:00:57.855644    9108 cli_runner.go:115] Run: docker volume create test-preload-20211117230054-9504 --label name.minikube.sigs.k8s.io=test-preload-20211117230054-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:00:57.959282    9108 oci.go:102] Successfully created a docker volume test-preload-20211117230054-9504
	I1117 23:00:57.962620    9108 cli_runner.go:115] Run: docker run --rm --name test-preload-20211117230054-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20211117230054-9504 --entrypoint /usr/bin/test -v test-preload-20211117230054-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:00:59.033380    9108 cli_runner.go:168] Completed: docker run --rm --name test-preload-20211117230054-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20211117230054-9504 --entrypoint /usr/bin/test -v test-preload-20211117230054-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.0706681s)
	I1117 23:00:59.033412    9108 oci.go:106] Successfully prepared a docker volume test-preload-20211117230054-9504
	I1117 23:00:59.033486    9108 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I1117 23:00:59.037975    9108 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:00:59.378943    9108 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:00:59.119174974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:00:59.379309    9108 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:00:59.379309    9108 client.go:171] LocalClient.Create took 2.1532811s
	I1117 23:01:01.386950    9108 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:01:01.390666    9108 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504
	W1117 23:01:01.490773    9108 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504 returned with exit code 1
	I1117 23:01:01.491139    9108 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:01.771651    9108 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504
	W1117 23:01:01.877600    9108 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504 returned with exit code 1
	I1117 23:01:01.877600    9108 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:02.423404    9108 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504
	W1117 23:01:02.513096    9108 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504 returned with exit code 1
	I1117 23:01:02.513428    9108 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:03.173924    9108 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504
	W1117 23:01:03.275476    9108 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504 returned with exit code 1
	W1117 23:01:03.275879    9108 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	
	W1117 23:01:03.275937    9108 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:03.275937    9108 start.go:129] duration metric: createHost completed in 6.054754s
	I1117 23:01:03.276022    9108 start.go:80] releasing machines lock for "test-preload-20211117230054-9504", held for 6.0548395s
	W1117 23:01:03.276171    9108 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:01:03.283893    9108 cli_runner.go:115] Run: docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}
	W1117 23:01:03.376403    9108 cli_runner.go:162] docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:01:03.376711    9108 delete.go:82] Unable to get host status for test-preload-20211117230054-9504, assuming it has already been deleted: state: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	W1117 23:01:03.376856    9108 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:01:03.376856    9108 start.go:547] Will try again in 5 seconds ...
	I1117 23:01:08.377601    9108 start.go:313] acquiring machines lock for test-preload-20211117230054-9504: {Name:mkf9c361be77da87d4623f7e88facb40d32a0c0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:01:08.377601    9108 start.go:317] acquired machines lock for "test-preload-20211117230054-9504" in 0s
	I1117 23:01:08.377601    9108 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:01:08.378136    9108 fix.go:55] fixHost starting: 
	I1117 23:01:08.385841    9108 cli_runner.go:115] Run: docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}
	W1117 23:01:08.482510    9108 cli_runner.go:162] docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:01:08.482611    9108 fix.go:108] recreateIfNeeded on test-preload-20211117230054-9504: state= err=unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:08.482713    9108 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:01:08.491661    9108 out.go:176] * docker "test-preload-20211117230054-9504" container is missing, will recreate.
	I1117 23:01:08.491661    9108 delete.go:124] DEMOLISHING test-preload-20211117230054-9504 ...
	I1117 23:01:08.495307    9108 cli_runner.go:115] Run: docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}
	W1117 23:01:08.596515    9108 cli_runner.go:162] docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:01:08.596515    9108 stop.go:75] unable to get state: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:08.596515    9108 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:08.605821    9108 cli_runner.go:115] Run: docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}
	W1117 23:01:08.705386    9108 cli_runner.go:162] docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:01:08.705386    9108 delete.go:82] Unable to get host status for test-preload-20211117230054-9504, assuming it has already been deleted: state: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:08.708496    9108 cli_runner.go:115] Run: docker container inspect -f {{.Id}} test-preload-20211117230054-9504
	W1117 23:01:08.799552    9108 cli_runner.go:162] docker container inspect -f {{.Id}} test-preload-20211117230054-9504 returned with exit code 1
	I1117 23:01:08.799721    9108 kic.go:360] could not find the container test-preload-20211117230054-9504 to remove it. will try anyways
	I1117 23:01:08.803858    9108 cli_runner.go:115] Run: docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}
	W1117 23:01:08.899440    9108 cli_runner.go:162] docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:01:08.899618    9108 oci.go:83] error getting container status, will try to delete anyways: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:08.903431    9108 cli_runner.go:115] Run: docker exec --privileged -t test-preload-20211117230054-9504 /bin/bash -c "sudo init 0"
	W1117 23:01:08.993214    9108 cli_runner.go:162] docker exec --privileged -t test-preload-20211117230054-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:01:08.993214    9108 oci.go:658] error shutdown test-preload-20211117230054-9504: docker exec --privileged -t test-preload-20211117230054-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:09.996864    9108 cli_runner.go:115] Run: docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}
	W1117 23:01:10.089354    9108 cli_runner.go:162] docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:01:10.089354    9108 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:10.089354    9108 oci.go:672] temporary error: container test-preload-20211117230054-9504 status is  but expect it to be exited
	I1117 23:01:10.089354    9108 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:10.558177    9108 cli_runner.go:115] Run: docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}
	W1117 23:01:10.649976    9108 cli_runner.go:162] docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:01:10.650169    9108 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:10.650169    9108 oci.go:672] temporary error: container test-preload-20211117230054-9504 status is  but expect it to be exited
	I1117 23:01:10.650169    9108 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:11.545782    9108 cli_runner.go:115] Run: docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}
	W1117 23:01:11.636264    9108 cli_runner.go:162] docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:01:11.636346    9108 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:11.636346    9108 oci.go:672] temporary error: container test-preload-20211117230054-9504 status is  but expect it to be exited
	I1117 23:01:11.636563    9108 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:12.278309    9108 cli_runner.go:115] Run: docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}
	W1117 23:01:12.365918    9108 cli_runner.go:162] docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:01:12.366006    9108 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:12.366006    9108 oci.go:672] temporary error: container test-preload-20211117230054-9504 status is  but expect it to be exited
	I1117 23:01:12.366248    9108 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:13.480598    9108 cli_runner.go:115] Run: docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}
	W1117 23:01:13.572754    9108 cli_runner.go:162] docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:01:13.573045    9108 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:13.573159    9108 oci.go:672] temporary error: container test-preload-20211117230054-9504 status is  but expect it to be exited
	I1117 23:01:13.573159    9108 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:15.089177    9108 cli_runner.go:115] Run: docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}
	W1117 23:01:15.177500    9108 cli_runner.go:162] docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:01:15.177500    9108 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:15.177500    9108 oci.go:672] temporary error: container test-preload-20211117230054-9504 status is  but expect it to be exited
	I1117 23:01:15.177500    9108 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:18.222746    9108 cli_runner.go:115] Run: docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}
	W1117 23:01:18.319204    9108 cli_runner.go:162] docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:01:18.319453    9108 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:18.319453    9108 oci.go:672] temporary error: container test-preload-20211117230054-9504 status is  but expect it to be exited
	I1117 23:01:18.319558    9108 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:24.106151    9108 cli_runner.go:115] Run: docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}
	W1117 23:01:24.196765    9108 cli_runner.go:162] docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:01:24.196765    9108 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:24.196765    9108 oci.go:672] temporary error: container test-preload-20211117230054-9504 status is  but expect it to be exited
	I1117 23:01:24.196765    9108 oci.go:87] couldn't shut down test-preload-20211117230054-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	 
	I1117 23:01:24.201453    9108 cli_runner.go:115] Run: docker rm -f -v test-preload-20211117230054-9504
	W1117 23:01:24.290976    9108 cli_runner.go:162] docker rm -f -v test-preload-20211117230054-9504 returned with exit code 1
	W1117 23:01:24.292328    9108 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:01:24.292328    9108 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:01:25.293007    9108 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:01:25.296615    9108 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:01:25.296615    9108 start.go:160] libmachine.API.Create for "test-preload-20211117230054-9504" (driver="docker")
	I1117 23:01:25.296615    9108 client.go:168] LocalClient.Create starting
	I1117 23:01:25.297200    9108 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:01:25.297932    9108 main.go:130] libmachine: Decoding PEM data...
	I1117 23:01:25.297932    9108 main.go:130] libmachine: Parsing certificate...
	I1117 23:01:25.298012    9108 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:01:25.298012    9108 main.go:130] libmachine: Decoding PEM data...
	I1117 23:01:25.298012    9108 main.go:130] libmachine: Parsing certificate...
	I1117 23:01:25.303507    9108 cli_runner.go:115] Run: docker network inspect test-preload-20211117230054-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:01:25.398153    9108 network_create.go:67] Found existing network {name:test-preload-20211117230054-9504 subnet:0xc000e2c750 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 23:01:25.398153    9108 kic.go:106] calculated static IP "192.168.49.2" for the "test-preload-20211117230054-9504" container
	I1117 23:01:25.406293    9108 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:01:25.500152    9108 cli_runner.go:115] Run: docker volume create test-preload-20211117230054-9504 --label name.minikube.sigs.k8s.io=test-preload-20211117230054-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:01:25.590049    9108 oci.go:102] Successfully created a docker volume test-preload-20211117230054-9504
	I1117 23:01:25.594511    9108 cli_runner.go:115] Run: docker run --rm --name test-preload-20211117230054-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20211117230054-9504 --entrypoint /usr/bin/test -v test-preload-20211117230054-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:01:26.490785    9108 oci.go:106] Successfully prepared a docker volume test-preload-20211117230054-9504
	I1117 23:01:26.490785    9108 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I1117 23:01:26.495221    9108 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:01:26.838838    9108 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:01:26.579420586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:01:26.839367    9108 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:01:26.839590    9108 client.go:171] LocalClient.Create took 1.5429319s
	I1117 23:01:28.847629    9108 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:01:28.851242    9108 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504
	W1117 23:01:28.946427    9108 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504 returned with exit code 1
	I1117 23:01:28.946677    9108 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:29.130031    9108 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504
	W1117 23:01:29.217218    9108 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504 returned with exit code 1
	I1117 23:01:29.217643    9108 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:29.553352    9108 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504
	W1117 23:01:29.643717    9108 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504 returned with exit code 1
	I1117 23:01:29.643980    9108 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:30.111550    9108 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504
	W1117 23:01:30.204789    9108 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504 returned with exit code 1
	W1117 23:01:30.204889    9108 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	
	W1117 23:01:30.205033    9108 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:30.205033    9108 start.go:129] duration metric: createHost completed in 4.9118768s
	I1117 23:01:30.212849    9108 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:01:30.216324    9108 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504
	W1117 23:01:30.305896    9108 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504 returned with exit code 1
	I1117 23:01:30.305896    9108 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:30.506790    9108 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504
	W1117 23:01:30.598570    9108 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504 returned with exit code 1
	I1117 23:01:30.598866    9108 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:30.902378    9108 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504
	W1117 23:01:30.992044    9108 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504 returned with exit code 1
	I1117 23:01:30.992110    9108 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:31.660404    9108 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504
	W1117 23:01:31.762681    9108 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504 returned with exit code 1
	W1117 23:01:31.762802    9108 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	
	W1117 23:01:31.762802    9108 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117230054-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117230054-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504
	I1117 23:01:31.762802    9108 fix.go:57] fixHost completed within 23.3844905s
	I1117 23:01:31.762802    9108 start.go:80] releasing machines lock for "test-preload-20211117230054-9504", held for 23.3850256s
	W1117 23:01:31.763493    9108 out.go:241] * Failed to start docker container. Running "minikube delete -p test-preload-20211117230054-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p test-preload-20211117230054-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:01:31.767454    9108 out.go:176] 
	W1117 23:01:31.767454    9108 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:01:31.767454    9108 out.go:241] * 
	* 
	W1117 23:01:31.769194    9108 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:01:31.771249    9108 out.go:176] 

                                                
                                                
** /stderr **
preload_test.go:51: out/minikube-windows-amd64.exe start -p test-preload-20211117230054-9504 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0 failed: exit status 80
panic.go:642: *** TestPreload FAILED at 2021-11-17 23:01:31.8816179 +0000 GMT m=+2098.379853601
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20211117230054-9504
helpers_test.go:235: (dbg) docker inspect test-preload-20211117230054-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "test-preload-20211117230054-9504",
	        "Id": "b3bbae674dd3b5b34377631fd93904dbd1dd031e00db45a51c991bac18fcc76d",
	        "Created": "2021-11-17T23:00:57.63310691Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-20211117230054-9504 -n test-preload-20211117230054-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-20211117230054-9504 -n test-preload-20211117230054-9504: exit status 7 (1.7644819s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:01:33.733356    9400 status.go:247] status error: host: state: unknown state "test-preload-20211117230054-9504": docker container inspect test-preload-20211117230054-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117230054-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-20211117230054-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "test-preload-20211117230054-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20211117230054-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20211117230054-9504: (2.7403581s)
--- FAIL: TestPreload (42.05s)

                                                
                                    
TestScheduledStopWindows (42.01s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20211117230136-9504 --memory=2048 --driver=docker
scheduled_stop_test.go:129: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p scheduled-stop-20211117230136-9504 --memory=2048 --driver=docker: exit status 80 (37.281861s)

                                                
                                                
-- stdout --
	* [scheduled-stop-20211117230136-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node scheduled-stop-20211117230136-9504 in cluster scheduled-stop-20211117230136-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "scheduled-stop-20211117230136-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:01:41.350042   11196 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 23:02:08.797900   11196 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p scheduled-stop-20211117230136-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:131: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-20211117230136-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node scheduled-stop-20211117230136-9504 in cluster scheduled-stop-20211117230136-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "scheduled-stop-20211117230136-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:01:41.350042   11196 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 23:02:08.797900   11196 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p scheduled-stop-20211117230136-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:642: *** TestScheduledStopWindows FAILED at 2021-11-17 23:02:13.7792527 +0000 GMT m=+2140.277174201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopWindows]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-20211117230136-9504
helpers_test.go:235: (dbg) docker inspect scheduled-stop-20211117230136-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-20211117230136-9504",
	        "Id": "9d148c4748aa66cb1fcf8d2a23b4463fa5f83666d37a536db45c1af626ad141e",
	        "Created": "2021-11-17T23:01:39.570886856Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20211117230136-9504 -n scheduled-stop-20211117230136-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20211117230136-9504 -n scheduled-stop-20211117230136-9504: exit status 7 (1.822227s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:02:15.687747    8708 status.go:247] status error: host: state: unknown state "scheduled-stop-20211117230136-9504": docker container inspect scheduled-stop-20211117230136-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: scheduled-stop-20211117230136-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-20211117230136-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-20211117230136-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20211117230136-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20211117230136-9504: (2.7987989s)
--- FAIL: TestScheduledStopWindows (42.01s)

                                                
                                    
TestSkaffold (43.49s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\skaffold.exe2958244009 version
skaffold_test.go:61: skaffold version: v1.35.0
skaffold_test.go:64: (dbg) Run:  out/minikube-windows-amd64.exe start -p skaffold-20211117230218-9504 --memory=2600 --driver=docker
skaffold_test.go:64: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p skaffold-20211117230218-9504 --memory=2600 --driver=docker: exit status 80 (37.6889312s)

                                                
                                                
-- stdout --
	* [skaffold-20211117230218-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node skaffold-20211117230218-9504 in cluster skaffold-20211117230218-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	* docker "skaffold-20211117230218-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:02:25.019060   11040 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 23:02:52.420725   11040 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p skaffold-20211117230218-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:66: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-20211117230218-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node skaffold-20211117230218-9504 in cluster skaffold-20211117230218-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	* docker "skaffold-20211117230218-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:02:25.019060   11040 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 23:02:52.420725   11040 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p skaffold-20211117230218-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:642: *** TestSkaffold FAILED at 2021-11-17 23:02:57.4192086 +0000 GMT m=+2183.916802801
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-20211117230218-9504
helpers_test.go:235: (dbg) docker inspect skaffold-20211117230218-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-20211117230218-9504",
	        "Id": "da96522c0e5708df0a57e39f9682c969a2c2ef268139df2b45ad3717bda9e2bc",
	        "Created": "2021-11-17T23:02:23.220833081Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p skaffold-20211117230218-9504 -n skaffold-20211117230218-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p skaffold-20211117230218-9504 -n skaffold-20211117230218-9504: exit status 7 (1.7529716s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:02:59.256331   11908 status.go:247] status error: host: state: unknown state "skaffold-20211117230218-9504": docker container inspect skaffold-20211117230218-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: skaffold-20211117230218-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-20211117230218-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-20211117230218-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p skaffold-20211117230218-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p skaffold-20211117230218-9504: (2.7183766s)
--- FAIL: TestSkaffold (43.49s)

                                                
                                    
TestInsufficientStorage (11.35s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20211117230301-9504 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:51: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20211117230301-9504 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (6.941448s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4cd0e827-ac0b-4cbb-8983-df6175d92651","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20211117230301-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"574f866a-d495-4485-baf8-94421b79ae29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"28f8f74b-c12a-4a4b-acdf-48ee702e7c92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"60e0359c-15ed-4b6e-a8b3-6e9c5162181b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"aec697ae-c3b4-423d-9c4e-6fdc4099cbd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fba5e9b5-99f3-4ac7-b361-66d939a1c1f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a009d908-4f7d-4d89-bed9-1b7144923940","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20211117230301-9504 in cluster insufficient-storage-20211117230301-9504","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7fcf94b-3fc5-4f43-a138-4f3140555a18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e3ef140c-9044-440a-9cec-fee7352cd687","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d970cef9-91d9-4cf7-8c97-314ef8e06f99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:03:06.903367    4184 oci.go:197] error getting kernel modules path: Unable to locate kernel modules

                                                
                                                
** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20211117230301-9504 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20211117230301-9504 --output=json --layout=cluster: exit status 7 (1.7197151s)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20211117230301-9504","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.24.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":520,"StatusName":"Unknown"}},"Nodes":[{"Name":"insufficient-storage-20211117230301-9504","StatusCode":520,"StatusName":"Unknown","Components":{"apiserver":{"Name":"apiserver","StatusCode":520,"StatusName":"Unknown"},"kubelet":{"Name":"kubelet","StatusCode":520,"StatusName":"Unknown"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:03:10.638272   11300 status.go:258] status error: host: state: unknown state "insufficient-storage-20211117230301-9504": docker container inspect insufficient-storage-20211117230301-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: insufficient-storage-20211117230301-9504
	E1117 23:03:10.638272   11300 status.go:261] The "insufficient-storage-20211117230301-9504" host does not exist!

                                                
                                                
** /stderr **
status_test.go:99: incorrect node status code: 507
helpers_test.go:175: Cleaning up "insufficient-storage-20211117230301-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20211117230301-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20211117230301-9504: (2.6871407s)
--- FAIL: TestInsufficientStorage (11.35s)

                                                
                                    
TestKubernetesUpgrade (75.67s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20211117230530-9504 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20211117230530-9504 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker: exit status 80 (50.2283446s)

-- stdout --
	* [kubernetes-upgrade-20211117230530-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node kubernetes-upgrade-20211117230530-9504 in cluster kubernetes-upgrade-20211117230530-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-20211117230530-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

-- /stdout --
** stderr ** 
	I1117 23:05:30.665204    8772 out.go:297] Setting OutFile to fd 1580 ...
	I1117 23:05:30.729200    8772 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:05:30.729200    8772 out.go:310] Setting ErrFile to fd 1588...
	I1117 23:05:30.729200    8772 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:05:30.741204    8772 out.go:304] Setting JSON to false
	I1117 23:05:30.743248    8772 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79646,"bootTime":1637110684,"procs":133,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:05:30.744203    8772 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:05:30.749966    8772 out.go:176] * [kubernetes-upgrade-20211117230530-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:05:30.749966    8772 notify.go:174] Checking for updates...
	I1117 23:05:30.752963    8772 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:05:30.754967    8772 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:05:30.756961    8772 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:05:30.758053    8772 config.go:176] Loaded profile config "cert-expiration-20211117230315-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:05:30.758971    8772 config.go:176] Loaded profile config "missing-upgrade-20211117230443-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 23:05:30.758971    8772 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:05:30.758971    8772 config.go:176] Loaded profile config "running-upgrade-20211117230442-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 23:05:30.758971    8772 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:05:32.357439    8772 docker.go:132] docker version: linux-19.03.12
	I1117 23:05:32.364197    8772 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:05:32.710266    8772 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2021-11-17 23:05:32.439351554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:05:32.714453    8772 out.go:176] * Using the docker driver based on user configuration
	I1117 23:05:32.714579    8772 start.go:280] selected driver: docker
	I1117 23:05:32.714605    8772 start.go:775] validating driver "docker" against <nil>
	I1117 23:05:32.714656    8772 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:05:32.786169    8772 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:05:33.136687    8772 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2021-11-17 23:05:32.862731325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:05:33.136910    8772 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:05:33.137482    8772 start_flags.go:740] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 23:05:33.137482    8772 cni.go:93] Creating CNI manager for ""
	I1117 23:05:33.137543    8772 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:05:33.137543    8772 start_flags.go:282] config:
	{Name:kubernetes-upgrade-20211117230530-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20211117230530-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:05:33.148731    8772 out.go:176] * Starting control plane node kubernetes-upgrade-20211117230530-9504 in cluster kubernetes-upgrade-20211117230530-9504
	I1117 23:05:33.148731    8772 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:05:33.152286    8772 out.go:176] * Pulling base image ...
	I1117 23:05:33.152490    8772 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:05:33.152490    8772 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 23:05:33.152691    8772 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4
	I1117 23:05:33.152776    8772 cache.go:57] Caching tarball of preloaded images
	I1117 23:05:33.153140    8772 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:05:33.153432    8772 cache.go:60] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I1117 23:05:33.153670    8772 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubernetes-upgrade-20211117230530-9504\config.json ...
	I1117 23:05:33.153841    8772 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubernetes-upgrade-20211117230530-9504\config.json: {Name:mk5fb61f1209980ff25ea249dae29614fb8bc14c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:05:33.254284    8772 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:05:33.254284    8772 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:05:33.254284    8772 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:05:33.254284    8772 start.go:313] acquiring machines lock for kubernetes-upgrade-20211117230530-9504: {Name:mk1ec968bda1e8a3926c466e3972753cfa3d6cac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:05:33.254284    8772 start.go:317] acquired machines lock for "kubernetes-upgrade-20211117230530-9504" in 0s
	I1117 23:05:33.254284    8772 start.go:89] Provisioning new machine with config: &{Name:kubernetes-upgrade-20211117230530-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20211117230530-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I1117 23:05:33.254284    8772 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:05:33.259165    8772 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:05:33.259635    8772 start.go:160] libmachine.API.Create for "kubernetes-upgrade-20211117230530-9504" (driver="docker")
	I1117 23:05:33.259712    8772 client.go:168] LocalClient.Create starting
	I1117 23:05:33.260209    8772 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:05:33.260415    8772 main.go:130] libmachine: Decoding PEM data...
	I1117 23:05:33.260465    8772 main.go:130] libmachine: Parsing certificate...
	I1117 23:05:33.260659    8772 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:05:33.260857    8772 main.go:130] libmachine: Decoding PEM data...
	I1117 23:05:33.260857    8772 main.go:130] libmachine: Parsing certificate...
	I1117 23:05:33.265670    8772 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117230530-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:05:33.355645    8772 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117230530-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:05:33.360494    8772 network_create.go:254] running [docker network inspect kubernetes-upgrade-20211117230530-9504] to gather additional debugging logs...
	I1117 23:05:33.360601    8772 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117230530-9504
	W1117 23:05:33.451615    8772 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	I1117 23:05:33.451798    8772 network_create.go:257] error running [docker network inspect kubernetes-upgrade-20211117230530-9504]: docker network inspect kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:33.451855    8772 network_create.go:259] output of [docker network inspect kubernetes-upgrade-20211117230530-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20211117230530-9504
	
	** /stderr **
	I1117 23:05:33.456382    8772 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:05:33.564108    8772 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00078a5f8] misses:0}
	I1117 23:05:33.564108    8772 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:05:33.564108    8772 network_create.go:106] attempt to create docker network kubernetes-upgrade-20211117230530-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:05:33.568836    8772 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117230530-9504
	I1117 23:05:35.019187    8772 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117230530-9504: (1.4503406s)
	I1117 23:05:35.019634    8772 network_create.go:90] docker network kubernetes-upgrade-20211117230530-9504 192.168.49.0/24 created
	I1117 23:05:35.019634    8772 kic.go:106] calculated static IP "192.168.49.2" for the "kubernetes-upgrade-20211117230530-9504" container
	I1117 23:05:35.027761    8772 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:05:35.126782    8772 cli_runner.go:115] Run: docker volume create kubernetes-upgrade-20211117230530-9504 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117230530-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:05:35.240793    8772 oci.go:102] Successfully created a docker volume kubernetes-upgrade-20211117230530-9504
	I1117 23:05:35.243804    8772 cli_runner.go:115] Run: docker run --rm --name kubernetes-upgrade-20211117230530-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117230530-9504 --entrypoint /usr/bin/test -v kubernetes-upgrade-20211117230530-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:05:39.143227    8772 cli_runner.go:168] Completed: docker run --rm --name kubernetes-upgrade-20211117230530-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117230530-9504 --entrypoint /usr/bin/test -v kubernetes-upgrade-20211117230530-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (3.8993942s)
	I1117 23:05:39.143227    8772 oci.go:106] Successfully prepared a docker volume kubernetes-upgrade-20211117230530-9504
	I1117 23:05:39.143227    8772 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 23:05:39.143777    8772 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:05:39.147835    8772 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20211117230530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:05:39.160415    8772 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:05:39.265658    8772 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20211117230530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:05:39.265658    8772 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20211117230530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:05:39.511919    8772 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:53 OomKillDisable:true NGoroutines:58 SystemTime:2021-11-17 23:05:39.246455338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:05:39.512708    8772 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:05:39.512740    8772 client.go:171] LocalClient.Create took 6.2529812s
	I1117 23:05:41.522618    8772 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:05:41.525160    8772 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504
	W1117 23:05:41.618999    8772 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	I1117 23:05:41.619312    8772 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:41.900550    8772 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504
	W1117 23:05:41.988198    8772 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	I1117 23:05:41.988198    8772 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:42.533586    8772 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504
	W1117 23:05:42.625125    8772 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	I1117 23:05:42.625399    8772 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:43.285726    8772 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504
	W1117 23:05:43.377655    8772 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	W1117 23:05:43.377797    8772 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	
	W1117 23:05:43.377907    8772 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:43.377965    8772 start.go:129] duration metric: createHost completed in 10.123605s
	I1117 23:05:43.378005    8772 start.go:80] releasing machines lock for "kubernetes-upgrade-20211117230530-9504", held for 10.123605s
	W1117 23:05:43.378193    8772 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:05:43.386659    8772 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}
	W1117 23:05:43.476282    8772 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:05:43.476590    8772 delete.go:82] Unable to get host status for kubernetes-upgrade-20211117230530-9504, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	W1117 23:05:43.476805    8772 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:05:43.476805    8772 start.go:547] Will try again in 5 seconds ...
	I1117 23:05:48.477211    8772 start.go:313] acquiring machines lock for kubernetes-upgrade-20211117230530-9504: {Name:mk1ec968bda1e8a3926c466e3972753cfa3d6cac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:05:48.477384    8772 start.go:317] acquired machines lock for "kubernetes-upgrade-20211117230530-9504" in 98µs
	I1117 23:05:48.477612    8772 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:05:48.477648    8772 fix.go:55] fixHost starting: 
	I1117 23:05:48.484964    8772 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}
	W1117 23:05:48.571966    8772 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:05:48.571966    8772 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20211117230530-9504: state= err=unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:48.571966    8772 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:05:48.575964    8772 out.go:176] * docker "kubernetes-upgrade-20211117230530-9504" container is missing, will recreate.
	I1117 23:05:48.575964    8772 delete.go:124] DEMOLISHING kubernetes-upgrade-20211117230530-9504 ...
	I1117 23:05:48.581957    8772 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}
	W1117 23:05:48.679059    8772 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:05:48.679240    8772 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:48.679374    8772 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:48.688027    8772 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}
	W1117 23:05:48.784291    8772 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:05:48.784572    8772 delete.go:82] Unable to get host status for kubernetes-upgrade-20211117230530-9504, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:48.789087    8772 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20211117230530-9504
	W1117 23:05:48.880453    8772 cli_runner.go:162] docker container inspect -f {{.Id}} kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	I1117 23:05:48.880519    8772 kic.go:360] could not find the container kubernetes-upgrade-20211117230530-9504 to remove it. will try anyways
	I1117 23:05:48.884550    8772 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}
	W1117 23:05:48.968874    8772 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:05:48.969154    8772 oci.go:83] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:48.974131    8772 cli_runner.go:115] Run: docker exec --privileged -t kubernetes-upgrade-20211117230530-9504 /bin/bash -c "sudo init 0"
	W1117 23:05:49.069031    8772 cli_runner.go:162] docker exec --privileged -t kubernetes-upgrade-20211117230530-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:05:49.069031    8772 oci.go:658] error shutdown kubernetes-upgrade-20211117230530-9504: docker exec --privileged -t kubernetes-upgrade-20211117230530-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:50.075975    8772 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}
	W1117 23:05:50.175810    8772 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:05:50.176184    8772 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:50.176237    8772 oci.go:672] temporary error: container kubernetes-upgrade-20211117230530-9504 status is  but expect it to be exited
	I1117 23:05:50.176237    8772 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:50.643073    8772 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}
	W1117 23:05:50.730528    8772 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:05:50.730671    8772 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:50.730731    8772 oci.go:672] temporary error: container kubernetes-upgrade-20211117230530-9504 status is  but expect it to be exited
	I1117 23:05:50.730731    8772 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:51.625125    8772 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}
	W1117 23:05:51.719245    8772 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:05:51.719245    8772 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:51.719245    8772 oci.go:672] temporary error: container kubernetes-upgrade-20211117230530-9504 status is  but expect it to be exited
	I1117 23:05:51.719402    8772 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:52.360616    8772 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}
	W1117 23:05:52.451551    8772 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:05:52.451641    8772 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:52.451641    8772 oci.go:672] temporary error: container kubernetes-upgrade-20211117230530-9504 status is  but expect it to be exited
	I1117 23:05:52.451641    8772 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:53.562800    8772 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}
	W1117 23:05:53.663131    8772 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:05:53.663232    8772 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:53.663394    8772 oci.go:672] temporary error: container kubernetes-upgrade-20211117230530-9504 status is  but expect it to be exited
	I1117 23:05:53.663478    8772 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:55.180506    8772 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}
	W1117 23:05:55.274786    8772 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:05:55.274941    8772 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:55.275159    8772 oci.go:672] temporary error: container kubernetes-upgrade-20211117230530-9504 status is  but expect it to be exited
	I1117 23:05:55.275159    8772 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:58.325627    8772 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}
	W1117 23:05:58.428003    8772 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:05:58.428003    8772 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:05:58.428003    8772 oci.go:672] temporary error: container kubernetes-upgrade-20211117230530-9504 status is  but expect it to be exited
	I1117 23:05:58.428003    8772 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:06:04.214990    8772 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}
	W1117 23:06:04.310794    8772 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:06:04.310902    8772 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:06:04.310902    8772 oci.go:672] temporary error: container kubernetes-upgrade-20211117230530-9504 status is  but expect it to be exited
	I1117 23:06:04.310981    8772 oci.go:87] couldn't shut down kubernetes-upgrade-20211117230530-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	 
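The repeated "will retry after …" lines above come from minikube's retry helper (retry.go) re-running `docker container inspect` with a growing delay until the container reports the expected state or the attempts are exhausted. A minimal shell sketch of that pattern, with a hypothetical `probe` function standing in for the docker call (the failure count and delays are illustrative only, not minikube's actual schedule):

```shell
attempts=0
probe() {
  # Stand-in for `docker container inspect ... --format={{.State.Status}}`:
  # fails twice, then succeeds, purely for illustration.
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

delay=1
for try in 1 2 3 4 5; do
  if probe; then
    echo "verified after $attempts attempts"
    break
  fi
  sleep 0.1             # the real helper sleeps for $delay seconds
  delay=$((delay * 2))  # rough exponential backoff, as the retry intervals above suggest
done
```

In the log, every probe fails with "No such container", so the loop runs to exhaustion and the caller falls through to `couldn't shut down … (might be okay)` and a forced `docker rm -f -v`.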
	I1117 23:06:04.314602    8772 cli_runner.go:115] Run: docker rm -f -v kubernetes-upgrade-20211117230530-9504
	W1117 23:06:04.428906    8772 cli_runner.go:162] docker rm -f -v kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	W1117 23:06:04.429879    8772 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:06:04.429879    8772 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:06:05.430629    8772 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:06:05.435112    8772 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:06:05.436190    8772 start.go:160] libmachine.API.Create for "kubernetes-upgrade-20211117230530-9504" (driver="docker")
	I1117 23:06:05.436247    8772 client.go:168] LocalClient.Create starting
	I1117 23:06:05.436858    8772 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:06:05.436921    8772 main.go:130] libmachine: Decoding PEM data...
	I1117 23:06:05.436921    8772 main.go:130] libmachine: Parsing certificate...
	I1117 23:06:05.436921    8772 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:06:05.436921    8772 main.go:130] libmachine: Decoding PEM data...
	I1117 23:06:05.436921    8772 main.go:130] libmachine: Parsing certificate...
	I1117 23:06:05.442612    8772 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117230530-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:06:05.559724    8772 network_create.go:67] Found existing network {name:kubernetes-upgrade-20211117230530-9504 subnet:0xc000aa66c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 23:06:05.559724    8772 kic.go:106] calculated static IP "192.168.49.2" for the "kubernetes-upgrade-20211117230530-9504" container
	I1117 23:06:05.569213    8772 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:06:05.673869    8772 cli_runner.go:115] Run: docker volume create kubernetes-upgrade-20211117230530-9504 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117230530-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:06:05.776883    8772 oci.go:102] Successfully created a docker volume kubernetes-upgrade-20211117230530-9504
	I1117 23:06:05.780887    8772 cli_runner.go:115] Run: docker run --rm --name kubernetes-upgrade-20211117230530-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117230530-9504 --entrypoint /usr/bin/test -v kubernetes-upgrade-20211117230530-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:06:15.374366    8772 cli_runner.go:168] Completed: docker run --rm --name kubernetes-upgrade-20211117230530-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117230530-9504 --entrypoint /usr/bin/test -v kubernetes-upgrade-20211117230530-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (9.5934071s)
	I1117 23:06:15.374366    8772 oci.go:106] Successfully prepared a docker volume kubernetes-upgrade-20211117230530-9504
	I1117 23:06:15.374619    8772 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 23:06:15.374747    8772 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:06:15.379992    8772 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:06:15.380119    8772 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20211117230530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:06:15.491247    8772 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20211117230530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:06:15.491247    8772 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20211117230530-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
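The failed extraction step above is just `tar` run inside a container, with the compressed preload mounted read-only and a named volume mounted as the destination. The same tar invocation can be exercised locally without docker; this sketch substitutes gzip for lz4 so it runs without the `lz4` tool installed (the `-I` decompressor flag works the same way), and all paths are throwaway:

```shell
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/var/lib/docker"
echo "preloaded image layer" > "$workdir/var/lib/docker/layer.txt"

# Pack a fake preload, then unpack it into a fresh directory, mirroring:
#   docker run ... --entrypoint /usr/bin/tar ... -I lz4 -xf /preloaded.tar -C /extractDir
# (gzip stands in for lz4 here; minikube's real tarball is lz4-compressed)
tar -C "$workdir" -I gzip -cf "$workdir/preloaded.tar" var
mkdir "$workdir/extractDir"
tar -I gzip -xf "$workdir/preloaded.tar" -C "$workdir/extractDir"

cat "$workdir/extractDir/var/lib/docker/layer.txt"
```

In the log the command never reaches tar at all: Docker Desktop's file-sharing API (which must approve the `C:\Users\...` bind mount) throws "The notification platform is unavailable", so `docker run` exits with status 125 before the container starts.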
	I1117 23:06:15.759901    8772 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:74 OomKillDisable:true NGoroutines:76 SystemTime:2021-11-17 23:06:15.477761999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:06:15.760298    8772 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:06:15.760298    8772 client.go:171] LocalClient.Create took 10.3239731s
	I1117 23:06:17.770915    8772 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:06:17.775100    8772 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504
	W1117 23:06:17.868746    8772 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	I1117 23:06:17.868746    8772 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:06:18.052005    8772 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504
	W1117 23:06:18.139023    8772 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	I1117 23:06:18.139264    8772 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:06:18.474746    8772 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504
	W1117 23:06:18.571505    8772 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	I1117 23:06:18.571785    8772 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:06:19.038502    8772 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504
	W1117 23:06:19.137484    8772 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	W1117 23:06:19.137484    8772 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	
	W1117 23:06:19.137484    8772 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:06:19.137484    8772 start.go:129] duration metric: createHost completed in 13.7067519s
	I1117 23:06:19.146288    8772 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:06:19.151235    8772 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504
	W1117 23:06:19.236374    8772 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	I1117 23:06:19.236801    8772 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:06:19.437685    8772 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504
	W1117 23:06:19.527526    8772 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	I1117 23:06:19.527526    8772 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:06:19.831464    8772 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504
	W1117 23:06:19.923541    8772 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	I1117 23:06:19.923541    8772 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:06:20.592630    8772 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504
	W1117 23:06:20.684611    8772 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504 returned with exit code 1
	W1117 23:06:20.684611    8772 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	
	W1117 23:06:20.684611    8772 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	I1117 23:06:20.684611    8772 fix.go:57] fixHost completed within 32.2067223s
	I1117 23:06:20.684611    8772 start.go:80] releasing machines lock for "kubernetes-upgrade-20211117230530-9504", held for 32.2068819s
	W1117 23:06:20.685418    8772 out.go:241] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20211117230530-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20211117230530-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:06:20.688443    8772 out.go:176] 
	W1117 23:06:20.688736    8772 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:06:20.688736    8772 out.go:241] * 
	* 
	W1117 23:06:20.689674    8772 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:06:20.695095    8772 out.go:176] 

** /stderr **
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20211117230530-9504 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker: exit status 80
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20211117230530-9504
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20211117230530-9504: exit status 82 (19.3998423s)

-- stdout --
	* Stopping node "kubernetes-upgrade-20211117230530-9504"  ...
	* Stopping node "kubernetes-upgrade-20211117230530-9504"  ...
	* Stopping node "kubernetes-upgrade-20211117230530-9504"  ...
	* Stopping node "kubernetes-upgrade-20211117230530-9504"  ...
	* Stopping node "kubernetes-upgrade-20211117230530-9504"  ...
	* Stopping node "kubernetes-upgrade-20211117230530-9504"  ...
	
	

-- /stdout --
** stderr ** 
	E1117 23:06:24.582030    7252 daemonize_windows.go:39] error terminating scheduled stop for profile kubernetes-upgrade-20211117230530-9504: stopping schedule-stop service for profile kubernetes-upgrade-20211117230530-9504: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117230530-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117230530-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect kubernetes-upgrade-20211117230530-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:236: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20211117230530-9504 failed: exit status 82
panic.go:642: *** TestKubernetesUpgrade FAILED at 2021-11-17 23:06:40.2176528 +0000 GMT m=+2406.713576001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20211117230530-9504
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20211117230530-9504:

-- stdout --
	[
	    {
	        "Name": "kubernetes-upgrade-20211117230530-9504",
	        "Id": "f4544fe0de72b0f69ca224af3be38f62453fa7af95e0c77d90694be1e3bab009",
	        "Created": "2021-11-17T23:05:33.65276194Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20211117230530-9504 -n kubernetes-upgrade-20211117230530-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20211117230530-9504 -n kubernetes-upgrade-20211117230530-9504: exit status 7 (1.83562s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:06:42.127030    9228 status.go:247] status error: host: state: unknown state "kubernetes-upgrade-20211117230530-9504": docker container inspect kubernetes-upgrade-20211117230530-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117230530-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-20211117230530-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20211117230530-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20211117230530-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20211117230530-9504: (4.0121439s)
--- FAIL: TestKubernetesUpgrade (75.67s)

TestMissingContainerUpgrade (271.63s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.3451204043.exe start -p missing-upgrade-20211117230443-9504 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.3451204043.exe start -p missing-upgrade-20211117230443-9504 --memory=2200 --driver=docker: (3m8.7297951s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20211117230443-9504

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20211117230443-9504: (10.6698826s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20211117230443-9504
version_upgrade_test.go:336: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-20211117230443-9504 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p missing-upgrade-20211117230443-9504 --memory=2200 --alsologtostderr -v=1 --driver=docker: exit status 80 (1m4.8320276s)

-- stdout --
	* [missing-upgrade-20211117230443-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Kubernetes 1.22.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.3
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-20211117230443-9504 in cluster missing-upgrade-20211117230443-9504
	* Pulling base image ...
	* docker "missing-upgrade-20211117230443-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "missing-upgrade-20211117230443-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:08:04.071170   10920 out.go:297] Setting OutFile to fd 1588 ...
	I1117 23:08:04.143023   10920 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:08:04.143023   10920 out.go:310] Setting ErrFile to fd 1356...
	I1117 23:08:04.143023   10920 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:08:04.159067   10920 out.go:304] Setting JSON to false
	I1117 23:08:04.163434   10920 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79799,"bootTime":1637110685,"procs":135,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:08:04.163434   10920 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:08:04.169734   10920 out.go:176] * [missing-upgrade-20211117230443-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:08:04.169734   10920 notify.go:174] Checking for updates...
	I1117 23:08:04.175172   10920 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:08:04.170739   10920 preload.go:305] deleting older generation preload C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
	I1117 23:08:04.177591   10920 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	W1117 23:08:04.177650   10920 preload.go:308] Failed to clean up older preload files, consider running `minikube delete --all --purge`
	I1117 23:08:04.179916   10920 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:08:04.183243   10920 config.go:176] Loaded profile config "missing-upgrade-20211117230443-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 23:08:04.183794   10920 start_flags.go:571] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c
	I1117 23:08:04.187953   10920 out.go:176] * Kubernetes 1.22.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.3
	I1117 23:08:04.187953   10920 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:08:05.851765   10920 docker.go:132] docker version: linux-19.03.12
	I1117 23:08:05.855962   10920 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:08:06.218660   10920 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:65 SystemTime:2021-11-17 23:08:05.935378178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:08:06.221658   10920 out.go:176] * Using the docker driver based on existing profile
	I1117 23:08:06.221658   10920 start.go:280] selected driver: docker
	I1117 23:08:06.221658   10920 start.go:775] validating driver "docker" against &{Name:missing-upgrade-20211117230443-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-20211117230443-9504 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:}
	I1117 23:08:06.222660   10920 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:08:06.285204   10920 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:08:06.663494   10920 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:65 SystemTime:2021-11-17 23:08:06.373809865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:08:06.663494   10920 cni.go:93] Creating CNI manager for ""
	I1117 23:08:06.663494   10920 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:08:06.663494   10920 start_flags.go:282] config:
	{Name:missing-upgrade-20211117230443-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-20211117230443-9504 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:}
	I1117 23:08:06.666464   10920 out.go:176] * Starting control plane node missing-upgrade-20211117230443-9504 in cluster missing-upgrade-20211117230443-9504
	I1117 23:08:06.666464   10920 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:08:06.671469   10920 out.go:176] * Pulling base image ...
	I1117 23:08:06.671469   10920 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime docker
	I1117 23:08:06.671469   10920 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	W1117 23:08:06.721798   10920 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.18.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1117 23:08:06.721798   10920 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\missing-upgrade-20211117230443-9504\config.json ...
	I1117 23:08:06.721798   10920 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.18.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.18.0
	I1117 23:08:06.721798   10920 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0
	I1117 23:08:06.721798   10920 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5
	I1117 23:08:06.721798   10920 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.18.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.18.0
	I1117 23:08:06.721798   10920 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.7 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.7
	I1117 23:08:06.721798   10920 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.3.1
	I1117 23:08:06.721798   10920 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.18.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.18.0
	I1117 23:08:06.721798   10920 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns:1.6.7 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns_1.6.7
	I1117 23:08:06.721798   10920 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.18.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.18.0
	I1117 23:08:06.721798   10920 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause:3.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.2
	I1117 23:08:06.829662   10920 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:08:06.829662   10920 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:08:06.829783   10920 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:08:06.829899   10920 start.go:313] acquiring machines lock for missing-upgrade-20211117230443-9504: {Name:mk9a706a1ac78cd0fd765590627eca4f2bdcb716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:08:06.830181   10920 start.go:317] acquired machines lock for "missing-upgrade-20211117230443-9504" in 282.2µs
	I1117 23:08:06.830346   10920 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:08:06.830455   10920 fix.go:55] fixHost starting: m01
	I1117 23:08:06.845407   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	I1117 23:08:06.879015   10920 cache.go:107] acquiring lock: {Name:mk0123cdd636b9e6bcf113e9915cb8141d00baf9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:08:06.880012   10920 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.18.0 exists
	I1117 23:08:06.880012   10920 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-scheduler_v1.18.0" took 158.2129ms
	I1117 23:08:06.880012   10920 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.18.0 succeeded
	I1117 23:08:06.894018   10920 cache.go:107] acquiring lock: {Name:mkfb829e453b41c54cb64fe3f427bb725e76d3e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:08:06.894018   10920 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns_1.6.7 exists
	I1117 23:08:06.894018   10920 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.7" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\coredns_1.6.7" took 171.2066ms
	I1117 23:08:06.894018   10920 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.7 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns_1.6.7 succeeded
	I1117 23:08:06.901541   10920 cache.go:107] acquiring lock: {Name:mk10332ff27deea3fc719751005de62c0e33d3b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:08:06.902607   10920 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.2 exists
	I1117 23:08:06.902607   10920 cache.go:107] acquiring lock: {Name:mk1e6aff471a3d1096c4578693ba51b1ebe2eabd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:08:06.902761   10920 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.18.0 exists
	I1117 23:08:06.903355   10920 cache.go:96] cache image "k8s.gcr.io/pause:3.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\pause_3.2" took 180.5437ms
	I1117 23:08:06.903425   10920 cache.go:80] save to tar file k8s.gcr.io/pause:3.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.2 succeeded
	I1117 23:08:06.903355   10920 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-proxy_v1.18.0" took 180.487ms
	I1117 23:08:06.903478   10920 cache.go:107] acquiring lock: {Name:mk07753e378828d6a9b5c8273895167d2e474020 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:08:06.904386   10920 cache.go:107] acquiring lock: {Name:mk16b2c84e0562e7dfabdafa8a4b202b59aeeb0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:08:06.904386   10920 cache.go:107] acquiring lock: {Name:mk9a88325537d98574cbbeffc553f1aabb2a53e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:08:06.906013   10920 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.18.0 succeeded
	I1117 23:08:06.906013   10920 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.3.1 exists
	I1117 23:08:06.906555   10920 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.7 exists
	I1117 23:08:06.906555   10920 cache.go:107] acquiring lock: {Name:mka79873c4497f8822659d56d0ea202f596b4cfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:08:06.906637   10920 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\docker.io\\kubernetesui\\dashboard_v2.3.1" took 184.8376ms
	I1117 23:08:06.907028   10920 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.18.0 exists
	I1117 23:08:06.907028   10920 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.3.1 succeeded
	I1117 23:08:06.906555   10920 cache.go:107] acquiring lock: {Name:mk1eeff90fa721c50dd9d804655fbb635013f18b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:08:06.906637   10920 cache.go:107] acquiring lock: {Name:mke9439de88fd7cfde7b3c89f335155fffdfe7dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:08:06.906717   10920 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\docker.io\\kubernetesui\\metrics-scraper_v1.0.7" took 184.9181ms
	I1117 23:08:06.907149   10920 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0 exists
	I1117 23:08:06.907149   10920 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.7 succeeded
	I1117 23:08:06.907313   10920 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\etcd_3.4.3-0" took 185.5142ms
	I1117 23:08:06.907313   10920 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0 succeeded
	I1117 23:08:06.908007   10920 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-apiserver_v1.18.0" took 185.1961ms
	I1117 23:08:06.908078   10920 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.18.0 succeeded
	I1117 23:08:06.908126   10920 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1117 23:08:06.908171   10920 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.18.0 exists
	I1117 23:08:06.908380   10920 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 186.5813ms
	I1117 23:08:06.908530   10920 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1117 23:08:06.908632   10920 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-controller-manager_v1.18.0" took 186.7308ms
	I1117 23:08:06.908632   10920 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.18.0 succeeded
	I1117 23:08:06.908632   10920 cache.go:87] Successfully saved all images to host disk.
	W1117 23:08:06.975817   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:06.975957   10920 fix.go:108] recreateIfNeeded on missing-upgrade-20211117230443-9504: state= err=unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:06.976005   10920 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:08:06.979400   10920 out.go:176] * docker "missing-upgrade-20211117230443-9504" container is missing, will recreate.
	I1117 23:08:06.979465   10920 delete.go:124] DEMOLISHING missing-upgrade-20211117230443-9504 ...
	I1117 23:08:06.986235   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:07.080617   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:08:07.080617   10920 stop.go:75] unable to get state: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:07.080617   10920 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:07.088207   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:07.202500   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:07.202500   10920 delete.go:82] Unable to get host status for missing-upgrade-20211117230443-9504, assuming it has already been deleted: state: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:07.206218   10920 cli_runner.go:115] Run: docker container inspect -f {{.Id}} missing-upgrade-20211117230443-9504
	W1117 23:08:07.301139   10920 cli_runner.go:162] docker container inspect -f {{.Id}} missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:08:07.301139   10920 kic.go:360] could not find the container missing-upgrade-20211117230443-9504 to remove it. will try anyways
	I1117 23:08:07.304150   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:07.401279   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:08:07.401279   10920 oci.go:83] error getting container status, will try to delete anyways: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:07.405285   10920 cli_runner.go:115] Run: docker exec --privileged -t missing-upgrade-20211117230443-9504 /bin/bash -c "sudo init 0"
	W1117 23:08:07.508702   10920 cli_runner.go:162] docker exec --privileged -t missing-upgrade-20211117230443-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:08:07.508702   10920 oci.go:658] error shutdown missing-upgrade-20211117230443-9504: docker exec --privileged -t missing-upgrade-20211117230443-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:08.515287   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:08.615362   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:08.615476   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:08.615583   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:08.615668   10920 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:09.173701   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:09.277598   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:09.277865   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:09.277865   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:09.278033   10920 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:10.363329   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:10.466353   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:10.466353   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:10.466353   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:10.466353   10920 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:11.781024   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:11.879500   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:11.879839   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:11.879952   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:11.880029   10920 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:13.469836   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:13.571276   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:13.571276   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:13.571276   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:13.571276   10920 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:15.919628   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:16.017740   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:16.017960   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:16.017960   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:16.018077   10920 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:20.531544   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:20.626573   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:20.626815   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:20.626815   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:20.626899   10920 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:23.852828   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:23.943711   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:23.943711   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:23.943711   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:23.943711   10920 oci.go:87] couldn't shut down missing-upgrade-20211117230443-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	 
	I1117 23:08:23.950170   10920 cli_runner.go:115] Run: docker rm -f -v missing-upgrade-20211117230443-9504
	W1117 23:08:24.039709   10920 cli_runner.go:162] docker rm -f -v missing-upgrade-20211117230443-9504 returned with exit code 1
	W1117 23:08:24.041836   10920 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:08:24.041891   10920 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:08:25.042390   10920 start.go:126] createHost starting for "m01" (driver="docker")
	I1117 23:08:25.060992   10920 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:08:25.061557   10920 start.go:160] libmachine.API.Create for "missing-upgrade-20211117230443-9504" (driver="docker")
	I1117 23:08:25.061557   10920 client.go:168] LocalClient.Create starting
	I1117 23:08:25.061557   10920 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:08:25.061557   10920 main.go:130] libmachine: Decoding PEM data...
	I1117 23:08:25.061557   10920 main.go:130] libmachine: Parsing certificate...
	I1117 23:08:25.062789   10920 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:08:25.062979   10920 main.go:130] libmachine: Decoding PEM data...
	I1117 23:08:25.062979   10920 main.go:130] libmachine: Parsing certificate...
	I1117 23:08:25.071598   10920 cli_runner.go:115] Run: docker network inspect missing-upgrade-20211117230443-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:08:25.168411   10920 cli_runner.go:162] docker network inspect missing-upgrade-20211117230443-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:08:25.171996   10920 network_create.go:254] running [docker network inspect missing-upgrade-20211117230443-9504] to gather additional debugging logs...
	I1117 23:08:25.171996   10920 cli_runner.go:115] Run: docker network inspect missing-upgrade-20211117230443-9504
	W1117 23:08:25.271256   10920 cli_runner.go:162] docker network inspect missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:08:25.271461   10920 network_create.go:257] error running [docker network inspect missing-upgrade-20211117230443-9504]: docker network inspect missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: missing-upgrade-20211117230443-9504
	I1117 23:08:25.271521   10920 network_create.go:259] output of [docker network inspect missing-upgrade-20211117230443-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: missing-upgrade-20211117230443-9504
	
	** /stderr **
	I1117 23:08:25.277879   10920 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:08:25.398140   10920 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc001208238] misses:0}
	I1117 23:08:25.398140   10920 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:08:25.398140   10920 network_create.go:106] attempt to create docker network missing-upgrade-20211117230443-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:08:25.402044   10920 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20211117230443-9504
	W1117 23:08:25.492561   10920 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20211117230443-9504 returned with exit code 1
	W1117 23:08:25.492701   10920 network_create.go:98] failed to create docker network missing-upgrade-20211117230443-9504 192.168.49.0/24, will retry: subnet is taken
	I1117 23:08:25.506482   10920 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc001208238] amended:false}} dirty:map[] misses:0}
	I1117 23:08:25.506482   10920 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:08:25.519083   10920 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc001208238] amended:true}} dirty:map[192.168.49.0:0xc001208238 192.168.58.0:0xc00010c2c8] misses:0}
	I1117 23:08:25.519083   10920 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:08:25.519083   10920 network_create.go:106] attempt to create docker network missing-upgrade-20211117230443-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:08:25.522081   10920 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20211117230443-9504
	I1117 23:08:27.747637   10920 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20211117230443-9504: (2.2255397s)
	I1117 23:08:27.747637   10920 network_create.go:90] docker network missing-upgrade-20211117230443-9504 192.168.58.0/24 created
	I1117 23:08:27.747637   10920 kic.go:106] calculated static IP "192.168.58.2" for the "missing-upgrade-20211117230443-9504" container
	I1117 23:08:27.758524   10920 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:08:27.866977   10920 cli_runner.go:115] Run: docker volume create missing-upgrade-20211117230443-9504 --label name.minikube.sigs.k8s.io=missing-upgrade-20211117230443-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:08:27.966666   10920 oci.go:102] Successfully created a docker volume missing-upgrade-20211117230443-9504
	I1117 23:08:27.971149   10920 cli_runner.go:115] Run: docker run --rm --name missing-upgrade-20211117230443-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-20211117230443-9504 --entrypoint /usr/bin/test -v missing-upgrade-20211117230443-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:08:29.985164   10920 cli_runner.go:168] Completed: docker run --rm --name missing-upgrade-20211117230443-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-20211117230443-9504 --entrypoint /usr/bin/test -v missing-upgrade-20211117230443-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (2.013858s)
	I1117 23:08:29.985164   10920 oci.go:106] Successfully prepared a docker volume missing-upgrade-20211117230443-9504
	I1117 23:08:29.985335   10920 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime docker
	I1117 23:08:29.990136   10920 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:08:30.359814   10920 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:52 OomKillDisable:true NGoroutines:58 SystemTime:2021-11-17 23:08:30.075551608 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:08:30.360180   10920 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:08:30.360228   10920 client.go:171] LocalClient.Create took 5.2986318s
	I1117 23:08:32.368885   10920 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:08:32.372600   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:08:32.467079   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:08:32.467398   10920 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:32.624608   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:08:32.714850   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:08:32.714850   10920 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:33.021100   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:08:33.124363   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:08:33.124363   10920 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:33.700707   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:08:33.796021   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	W1117 23:08:33.796021   10920 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	
	W1117 23:08:33.796021   10920 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:33.796021   10920 start.go:129] duration metric: createHost completed in 8.753509s
	I1117 23:08:33.803807   10920 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:08:33.806760   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:08:33.903208   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:08:33.903310   10920 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:34.087217   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:08:34.176901   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:08:34.177296   10920 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:34.512857   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:08:34.603916   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:08:34.604189   10920 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:35.069315   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:08:35.175734   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	W1117 23:08:35.175985   10920 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	
	W1117 23:08:35.176048   10920 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:35.176048   10920 fix.go:57] fixHost completed within 28.3454202s
	I1117 23:08:35.176048   10920 start.go:80] releasing machines lock for "missing-upgrade-20211117230443-9504", held for 28.3455906s
	W1117 23:08:35.176048   10920 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:08:35.176048   10920 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:08:35.176573   10920 start.go:547] Will try again in 5 seconds ...
	I1117 23:08:40.178443   10920 start.go:313] acquiring machines lock for missing-upgrade-20211117230443-9504: {Name:mk9a706a1ac78cd0fd765590627eca4f2bdcb716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:08:40.178443   10920 start.go:317] acquired machines lock for "missing-upgrade-20211117230443-9504" in 0s
	I1117 23:08:40.179029   10920 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:08:40.179201   10920 fix.go:55] fixHost starting: m01
	I1117 23:08:40.187218   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:40.277251   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:40.277466   10920 fix.go:108] recreateIfNeeded on missing-upgrade-20211117230443-9504: state= err=unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:40.277466   10920 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:08:40.281936   10920 out.go:176] * docker "missing-upgrade-20211117230443-9504" container is missing, will recreate.
	I1117 23:08:40.281990   10920 delete.go:124] DEMOLISHING missing-upgrade-20211117230443-9504 ...
	I1117 23:08:40.291463   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:40.382367   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:08:40.382499   10920 stop.go:75] unable to get state: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:40.382499   10920 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:40.391375   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:40.491624   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:40.491834   10920 delete.go:82] Unable to get host status for missing-upgrade-20211117230443-9504, assuming it has already been deleted: state: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:40.495483   10920 cli_runner.go:115] Run: docker container inspect -f {{.Id}} missing-upgrade-20211117230443-9504
	W1117 23:08:40.588161   10920 cli_runner.go:162] docker container inspect -f {{.Id}} missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:08:40.588477   10920 kic.go:360] could not find the container missing-upgrade-20211117230443-9504 to remove it. will try anyways
	I1117 23:08:40.592631   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:40.683797   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:08:40.683968   10920 oci.go:83] error getting container status, will try to delete anyways: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:40.686661   10920 cli_runner.go:115] Run: docker exec --privileged -t missing-upgrade-20211117230443-9504 /bin/bash -c "sudo init 0"
	W1117 23:08:40.781911   10920 cli_runner.go:162] docker exec --privileged -t missing-upgrade-20211117230443-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:08:40.781911   10920 oci.go:658] error shutdown missing-upgrade-20211117230443-9504: docker exec --privileged -t missing-upgrade-20211117230443-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:41.787949   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:41.887413   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:41.887413   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:41.887413   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:41.887413   10920 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:42.286613   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:42.393611   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:42.393611   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:42.393611   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:42.393611   10920 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:42.992940   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:43.092298   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:43.092561   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:43.092561   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:43.092727   10920 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:44.424381   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:44.517675   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:44.517987   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:44.517987   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:44.518050   10920 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:45.744337   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:45.908454   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:45.908454   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:45.908454   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:45.908454   10920 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:47.693326   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:47.788709   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:47.788782   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:47.788782   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:47.788865   10920 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:51.062915   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:51.162127   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:51.162219   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:51.162219   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:51.162336   10920 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:57.266054   10920 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}
	W1117 23:08:57.365624   10920 cli_runner.go:162] docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:08:57.365832   10920 oci.go:670] temporary error verifying shutdown: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:08:57.365832   10920 oci.go:672] temporary error: container missing-upgrade-20211117230443-9504 status is  but expect it to be exited
	I1117 23:08:57.365905   10920 oci.go:87] couldn't shut down missing-upgrade-20211117230443-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	 
	I1117 23:08:57.369823   10920 cli_runner.go:115] Run: docker rm -f -v missing-upgrade-20211117230443-9504
	W1117 23:08:57.473857   10920 cli_runner.go:162] docker rm -f -v missing-upgrade-20211117230443-9504 returned with exit code 1
	W1117 23:08:57.474830   10920 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:08:57.474830   10920 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:08:58.475264   10920 start.go:126] createHost starting for "m01" (driver="docker")
	I1117 23:08:58.479881   10920 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:08:58.479881   10920 start.go:160] libmachine.API.Create for "missing-upgrade-20211117230443-9504" (driver="docker")
	I1117 23:08:58.479881   10920 client.go:168] LocalClient.Create starting
	I1117 23:08:58.480858   10920 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:08:58.481089   10920 main.go:130] libmachine: Decoding PEM data...
	I1117 23:08:58.481089   10920 main.go:130] libmachine: Parsing certificate...
	I1117 23:08:58.481272   10920 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:08:58.481344   10920 main.go:130] libmachine: Decoding PEM data...
	I1117 23:08:58.481344   10920 main.go:130] libmachine: Parsing certificate...
	I1117 23:08:58.486956   10920 cli_runner.go:115] Run: docker network inspect missing-upgrade-20211117230443-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:08:58.592168   10920 cli_runner.go:162] docker network inspect missing-upgrade-20211117230443-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:08:58.595189   10920 network_create.go:254] running [docker network inspect missing-upgrade-20211117230443-9504] to gather additional debugging logs...
	I1117 23:08:58.595189   10920 cli_runner.go:115] Run: docker network inspect missing-upgrade-20211117230443-9504
	W1117 23:08:58.694428   10920 cli_runner.go:162] docker network inspect missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:08:58.694593   10920 network_create.go:257] error running [docker network inspect missing-upgrade-20211117230443-9504]: docker network inspect missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: missing-upgrade-20211117230443-9504
	I1117 23:08:58.694865   10920 network_create.go:259] output of [docker network inspect missing-upgrade-20211117230443-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: missing-upgrade-20211117230443-9504
	
	** /stderr **
	I1117 23:08:58.699531   10920 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:08:58.825300   10920 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc001208238] amended:true}} dirty:map[192.168.49.0:0xc001208238 192.168.58.0:0xc00010c2c8] misses:0}
	I1117 23:08:58.825300   10920 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:08:58.838294   10920 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc001208238] amended:true}} dirty:map[192.168.49.0:0xc001208238 192.168.58.0:0xc00010c2c8] misses:1}
	I1117 23:08:58.838294   10920 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:08:58.849294   10920 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc001208238] amended:true}} dirty:map[192.168.49.0:0xc001208238 192.168.58.0:0xc00010c2c8 192.168.67.0:0xc0005aa3b8] misses:1}
	I1117 23:08:58.849294   10920 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:08:58.849294   10920 network_create.go:106] attempt to create docker network missing-upgrade-20211117230443-9504 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 23:08:58.853309   10920 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20211117230443-9504
	I1117 23:08:59.063573   10920 network_create.go:90] docker network missing-upgrade-20211117230443-9504 192.168.67.0/24 created
	I1117 23:08:59.063573   10920 kic.go:106] calculated static IP "192.168.67.2" for the "missing-upgrade-20211117230443-9504" container
	I1117 23:08:59.074450   10920 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:08:59.179062   10920 cli_runner.go:115] Run: docker volume create missing-upgrade-20211117230443-9504 --label name.minikube.sigs.k8s.io=missing-upgrade-20211117230443-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:08:59.274701   10920 oci.go:102] Successfully created a docker volume missing-upgrade-20211117230443-9504
	I1117 23:08:59.277717   10920 cli_runner.go:115] Run: docker run --rm --name missing-upgrade-20211117230443-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-20211117230443-9504 --entrypoint /usr/bin/test -v missing-upgrade-20211117230443-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:09:02.884626   10920 cli_runner.go:168] Completed: docker run --rm --name missing-upgrade-20211117230443-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-20211117230443-9504 --entrypoint /usr/bin/test -v missing-upgrade-20211117230443-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (3.6068819s)
	I1117 23:09:02.884626   10920 oci.go:106] Successfully prepared a docker volume missing-upgrade-20211117230443-9504
	I1117 23:09:02.884626   10920 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime docker
	I1117 23:09:02.888624   10920 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:09:03.302415   10920 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:50 OomKillDisable:true NGoroutines:60 SystemTime:2021-11-17 23:09:02.984906036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:09:03.302415   10920 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:09:03.302415   10920 client.go:171] LocalClient.Create took 4.8224976s
	I1117 23:09:05.310725   10920 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:09:05.314154   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:09:05.422449   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:09:05.422587   10920 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:09:05.626403   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:09:05.719266   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:09:05.719482   10920 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:09:06.023434   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:09:06.114808   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:09:06.115130   10920 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:09:06.825575   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:09:06.917928   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	W1117 23:09:06.918119   10920 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	
	W1117 23:09:06.918164   10920 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:09:06.918164   10920 start.go:129] duration metric: createHost completed in 8.4427176s
	I1117 23:09:06.925422   10920 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:09:06.928837   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:09:07.017151   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:09:07.017311   10920 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:09:07.364485   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:09:07.456227   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:09:07.456494   10920 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:09:07.910024   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:09:08.001111   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	I1117 23:09:08.001183   10920 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:09:08.583065   10920 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504
	W1117 23:09:08.674775   10920 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504 returned with exit code 1
	W1117 23:09:08.675193   10920 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	
	W1117 23:09:08.675257   10920 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117230443-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117230443-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504
	I1117 23:09:08.675257   10920 fix.go:57] fixHost completed within 28.495842s
	I1117 23:09:08.675257   10920 start.go:80] releasing machines lock for "missing-upgrade-20211117230443-9504", held for 28.4966003s
	W1117 23:09:08.675712   10920 out.go:241] * Failed to start docker container. Running "minikube delete -p missing-upgrade-20211117230443-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p missing-upgrade-20211117230443-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:09:08.679964   10920 out.go:176] 
	W1117 23:09:08.680669   10920 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:09:08.680669   10920 out.go:241] * 
	* 
	W1117 23:09:08.683055   10920 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:09:08.685075   10920 out.go:176] 

** /stderr **
version_upgrade_test.go:338: failed missing container upgrade from v1.9.1. args: out/minikube-windows-amd64.exe start -p missing-upgrade-20211117230443-9504 --memory=2200 --alsologtostderr -v=1 --driver=docker : exit status 80
version_upgrade_test.go:340: *** TestMissingContainerUpgrade FAILED at 2021-11-17 23:09:08.8297369 +0000 GMT m=+2555.324548701
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20211117230443-9504
helpers_test.go:235: (dbg) docker inspect missing-upgrade-20211117230443-9504:

-- stdout --
	[
	    {
	        "Name": "missing-upgrade-20211117230443-9504",
	        "Id": "f998c02f10683e356a272d0df67b0622648ff08cf214edf7b60057e0103d76f3",
	        "Created": "2021-11-17T23:08:58.939203231Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-20211117230443-9504 -n missing-upgrade-20211117230443-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-20211117230443-9504 -n missing-upgrade-20211117230443-9504: exit status 7 (1.8113596s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:09:10.749262    5152 status.go:247] status error: host: state: unknown state "missing-upgrade-20211117230443-9504": docker container inspect missing-upgrade-20211117230443-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117230443-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-20211117230443-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "missing-upgrade-20211117230443-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20211117230443-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20211117230443-9504: (4.2678827s)
--- FAIL: TestMissingContainerUpgrade (271.63s)

TestNoKubernetes/serial/Start (40.47s)

=== RUN   TestNoKubernetes/serial/Start

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20211117230313-9504 --no-kubernetes --driver=docker

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:78: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20211117230313-9504 --no-kubernetes --driver=docker: exit status 80 (38.3470778s)

-- stdout --
	* [NoKubernetes-20211117230313-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting minikube without Kubernetes NoKubernetes-20211117230313-9504 in cluster NoKubernetes-20211117230313-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=5902MB) ...
	* docker "NoKubernetes-20211117230313-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5902MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 23:03:19.074192    9440 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 23:03:46.702760    9440 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-20211117230313-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:80: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20211117230313-9504 --no-kubernetes --driver=docker" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20211117230313-9504

=== CONT  TestNoKubernetes/serial/Start
helpers_test.go:235: (dbg) docker inspect NoKubernetes-20211117230313-9504:

-- stdout --
	[
	    {
	        "Name": "NoKubernetes-20211117230313-9504",
	        "Id": "08a498fed0eb394cf26d86d7ef9a7cd8f967165d8ad4a32d4853b924de374168",
	        "Created": "2021-11-17T23:03:17.091017856Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --

=== CONT  TestNoKubernetes/serial/Start
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20211117230313-9504 -n NoKubernetes-20211117230313-9504

=== CONT  TestNoKubernetes/serial/Start
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20211117230313-9504 -n NoKubernetes-20211117230313-9504: exit status 7 (1.9402536s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:03:53.798932   10156 status.go:247] status error: host: state: unknown state "NoKubernetes-20211117230313-9504": docker container inspect NoKubernetes-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20211117230313-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20211117230313-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Start (40.47s)

TestNoKubernetes/serial/Stop (17.17s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-20211117230313-9504
no_kubernetes_test.go:100: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p NoKubernetes-20211117230313-9504: exit status 82 (15.2040723s)

-- stdout --
	* Stopping node "NoKubernetes-20211117230313-9504"  ...
	* Stopping node "NoKubernetes-20211117230313-9504"  ...
	* Stopping node "NoKubernetes-20211117230313-9504"  ...
	* Stopping node "NoKubernetes-20211117230313-9504"  ...
	* Stopping node "NoKubernetes-20211117230313-9504"  ...
	* Stopping node "NoKubernetes-20211117230313-9504"  ...
	
	

-- /stdout --
** stderr ** 
	E1117 23:04:03.863882    6400 daemonize_windows.go:39] error terminating scheduled stop for profile NoKubernetes-20211117230313-9504: stopping schedule-stop service for profile NoKubernetes-20211117230313-9504: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "NoKubernetes-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20211117230313-9504
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect NoKubernetes-20211117230313-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20211117230313-9504
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:102: Failed to stop minikube "out/minikube-windows-amd64.exe stop -p NoKubernetes-20211117230313-9504" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20211117230313-9504
helpers_test.go:235: (dbg) docker inspect NoKubernetes-20211117230313-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:03:17Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "NoKubernetes-20211117230313-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/NoKubernetes-20211117230313-9504/_data",
	        "Name": "NoKubernetes-20211117230313-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20211117230313-9504 -n NoKubernetes-20211117230313-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20211117230313-9504 -n NoKubernetes-20211117230313-9504: exit status 7 (1.8481233s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:04:17.225733    8004 status.go:247] status error: host: state: unknown state "NoKubernetes-20211117230313-9504": docker container inspect NoKubernetes-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20211117230313-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20211117230313-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Stop (17.17s)

TestNoKubernetes/serial/StartNoArgs (67.11s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20211117230313-9504 --driver=docker

=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:133: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20211117230313-9504 --driver=docker: exit status 80 (1m5.159438s)

-- stdout --
	* [NoKubernetes-20211117230313-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes NoKubernetes-20211117230313-9504 in cluster NoKubernetes-20211117230313-9504
	* Pulling base image ...
	* docker "NoKubernetes-20211117230313-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5902MB) ...
	* docker "NoKubernetes-20211117230313-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5902MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 23:04:40.355045   10336 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 23:05:16.903590   10336 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-20211117230313-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:135: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20211117230313-9504 --driver=docker" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartNoArgs]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20211117230313-9504
helpers_test.go:235: (dbg) docker inspect NoKubernetes-20211117230313-9504:

-- stdout --
	[
	    {
	        "Name": "NoKubernetes-20211117230313-9504",
	        "Id": "81215cd7b805b758e99f64f91bd2f6145b85300c9949aae6780f6a2155eaaa50",
	        "Created": "2021-11-17T23:05:10.068492325Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20211117230313-9504 -n NoKubernetes-20211117230313-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20211117230313-9504 -n NoKubernetes-20211117230313-9504: exit status 7 (1.8413924s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:05:24.329773    6704 status.go:247] status error: host: state: unknown state "NoKubernetes-20211117230313-9504": docker container inspect NoKubernetes-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20211117230313-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20211117230313-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (67.11s)
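The stderr above already names the remediation path: `minikube delete -p <profile>` and `minikube logs --file=logs.txt`. A minimal triage sketch for a captured log, assuming a saved log file — the file name and sample lines below are illustrative stand-ins for a real `minikube logs --file=logs.txt` capture, not output from this run:

```shell
# Extract the top-level exit reason from a captured minikube log.
# /tmp/minikube_run.log stands in for the logs.txt the report asks for.
cat > /tmp/minikube_run.log <<'EOF'
! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
EOF
# minikube prefixes the fatal reason with "X Exiting due to <REASON_CODE>".
grep -o 'Exiting due to [A-Z_]*' /tmp/minikube_run.log
```

Grouping failures by this reason code (here `GUEST_PROVISION`, rooted in the repeated "Unable to locate kernel modules" error) makes it easy to see that the 154 failures in this report share one underlying cause.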

TestPause/serial/Start (43.35s)

=== RUN   TestPause/serial/Start
pause_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20211117230855-9504 --memory=2048 --install-addons=false --wait=all --driver=docker

=== CONT  TestPause/serial/Start
pause_test.go:78: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p pause-20211117230855-9504 --memory=2048 --install-addons=false --wait=all --driver=docker: exit status 80 (41.345241s)

-- stdout --
	* [pause-20211117230855-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node pause-20211117230855-9504 in cluster pause-20211117230855-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "pause-20211117230855-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 23:09:03.282829    8748 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 23:09:31.978011    8748 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p pause-20211117230855-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:80: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p pause-20211117230855-9504 --memory=2048 --install-addons=false --wait=all --driver=docker" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117230855-9504

=== CONT  TestPause/serial/Start
helpers_test.go:235: (dbg) docker inspect pause-20211117230855-9504:

-- stdout --
	[
	    {
	        "Name": "pause-20211117230855-9504",
	        "Id": "8a7a91fa46cd5f93f840efdf6367b2224f17ae20782e3b3cb3b78e481b3b47a7",
	        "Created": "2021-11-17T23:09:30.334392083Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --

=== CONT  TestPause/serial/Start
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504

=== CONT  TestPause/serial/Start
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504: exit status 7 (1.8263808s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:09:38.969315    1796 status.go:247] status error: host: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117230855-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/Start (43.35s)
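Each failed start leaves behind a minikube-labelled docker network with an empty `"Containers"` map, as the `docker inspect` dump above shows; these stale networks are what later trigger the "subnet is taken" retries. A sketch of recovering the occupied subnet from a saved inspect dump — the file name and the use of `grep`/`cut` instead of a JSON parser are assumptions for illustration:

```shell
# Recover the subnet held by a leftover minikube network from a saved inspect dump.
# /tmp/inspect.json mirrors (in abbreviated form) the dump shown above.
cat > /tmp/inspect.json <<'EOF'
[{"Name": "pause-20211117230855-9504", "IPAM": {"Config": [{"Subnet": "192.168.58.0/24", "Gateway": "192.168.58.1"}]}}]
EOF
# Pull out the quoted value following the "Subnet" key.
grep -o '"Subnet": *"[^"]*"' /tmp/inspect.json | cut -d'"' -f4
```

A stale network with no attached containers can then be removed with `docker network rm <name>` (or swept up by `minikube delete -p <profile>`), which also frees the reserved subnet.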

TestNetworkPlugins/group/auto/Start (41.41s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20211117230313-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p auto-20211117230313-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: exit status 80 (41.308599s)

-- stdout --
	* [auto-20211117230313-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node auto-20211117230313-9504 in cluster auto-20211117230313-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "auto-20211117230313-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:08:55.963759    9204 out.go:297] Setting OutFile to fd 1516 ...
	I1117 23:08:56.042367    9204 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:08:56.042367    9204 out.go:310] Setting ErrFile to fd 1848...
	I1117 23:08:56.042367    9204 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:08:56.054864    9204 out.go:304] Setting JSON to false
	I1117 23:08:56.059066    9204 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79851,"bootTime":1637110685,"procs":133,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:08:56.059066    9204 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:08:56.063538    9204 out.go:176] * [auto-20211117230313-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:08:56.063538    9204 notify.go:174] Checking for updates...
	I1117 23:08:56.067447    9204 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:08:56.069744    9204 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:08:56.071767    9204 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:08:56.075499    9204 config.go:176] Loaded profile config "missing-upgrade-20211117230443-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 23:08:56.075995    9204 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:08:56.078619    9204 config.go:176] Loaded profile config "stopped-upgrade-20211117230646-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 23:08:56.078619    9204 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:08:57.697165    9204 docker.go:132] docker version: linux-19.03.12
	I1117 23:08:57.701172    9204 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:08:58.060939    9204 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2021-11-17 23:08:57.789858421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:08:58.065934    9204 out.go:176] * Using the docker driver based on user configuration
	I1117 23:08:58.065934    9204 start.go:280] selected driver: docker
	I1117 23:08:58.065934    9204 start.go:775] validating driver "docker" against <nil>
	I1117 23:08:58.065934    9204 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:08:58.130529    9204 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:08:58.511562    9204 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2021-11-17 23:08:58.215814012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:08:58.511562    9204 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:08:58.512564    9204 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:08:58.512564    9204 cni.go:93] Creating CNI manager for ""
	I1117 23:08:58.512564    9204 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:08:58.512564    9204 start_flags.go:282] config:
	{Name:auto-20211117230313-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:auto-20211117230313-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:08:58.516564    9204 out.go:176] * Starting control plane node auto-20211117230313-9504 in cluster auto-20211117230313-9504
	I1117 23:08:58.516564    9204 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:08:58.518563    9204 out.go:176] * Pulling base image ...
	I1117 23:08:58.519573    9204 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:08:58.519573    9204 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:08:58.519573    9204 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:08:58.519573    9204 cache.go:57] Caching tarball of preloaded images
	I1117 23:08:58.519573    9204 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:08:58.519573    9204 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:08:58.520559    9204 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-20211117230313-9504\config.json ...
	I1117 23:08:58.520559    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-20211117230313-9504\config.json: {Name:mkc9b11e666ef24bd4d69485382b7bd29e1db6bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:08:58.626170    9204 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:08:58.626170    9204 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:08:58.626170    9204 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:08:58.626170    9204 start.go:313] acquiring machines lock for auto-20211117230313-9504: {Name:mkc04c626b1d0512d45236488263924597be9dfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:08:58.626170    9204 start.go:317] acquired machines lock for "auto-20211117230313-9504" in 0s
	I1117 23:08:58.626170    9204 start.go:89] Provisioning new machine with config: &{Name:auto-20211117230313-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:auto-20211117230313-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:08:58.626170    9204 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:08:58.630165    9204 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:08:58.630165    9204 start.go:160] libmachine.API.Create for "auto-20211117230313-9504" (driver="docker")
	I1117 23:08:58.630165    9204 client.go:168] LocalClient.Create starting
	I1117 23:08:58.630165    9204 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:08:58.631181    9204 main.go:130] libmachine: Decoding PEM data...
	I1117 23:08:58.631181    9204 main.go:130] libmachine: Parsing certificate...
	I1117 23:08:58.631181    9204 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:08:58.631181    9204 main.go:130] libmachine: Decoding PEM data...
	I1117 23:08:58.631181    9204 main.go:130] libmachine: Parsing certificate...
	I1117 23:08:58.637183    9204 cli_runner.go:115] Run: docker network inspect auto-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:08:58.739984    9204 cli_runner.go:162] docker network inspect auto-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:08:58.743988    9204 network_create.go:254] running [docker network inspect auto-20211117230313-9504] to gather additional debugging logs...
	I1117 23:08:58.743988    9204 cli_runner.go:115] Run: docker network inspect auto-20211117230313-9504
	W1117 23:08:58.845305    9204 cli_runner.go:162] docker network inspect auto-20211117230313-9504 returned with exit code 1
	I1117 23:08:58.845305    9204 network_create.go:257] error running [docker network inspect auto-20211117230313-9504]: docker network inspect auto-20211117230313-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20211117230313-9504
	I1117 23:08:58.845305    9204 network_create.go:259] output of [docker network inspect auto-20211117230313-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20211117230313-9504
	
	** /stderr **
	I1117 23:08:58.849294    9204 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:08:58.969557    9204 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000660430] misses:0}
	I1117 23:08:58.969557    9204 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:08:58.969557    9204 network_create.go:106] attempt to create docker network auto-20211117230313-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:08:58.973542    9204 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20211117230313-9504
	W1117 23:08:59.064848    9204 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20211117230313-9504 returned with exit code 1
	W1117 23:08:59.065244    9204 network_create.go:98] failed to create docker network auto-20211117230313-9504 192.168.49.0/24, will retry: subnet is taken
	I1117 23:08:59.081947    9204 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000660430] amended:false}} dirty:map[] misses:0}
	I1117 23:08:59.081947    9204 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:08:59.096472    9204 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000660430] amended:true}} dirty:map[192.168.49.0:0xc000660430 192.168.58.0:0xc000006930] misses:0}
	I1117 23:08:59.096533    9204 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:08:59.096533    9204 network_create.go:106] attempt to create docker network auto-20211117230313-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:08:59.099679    9204 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20211117230313-9504
	I1117 23:08:59.309300    9204 network_create.go:90] docker network auto-20211117230313-9504 192.168.58.0/24 created
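The lines above show minikube's subnet picker at work: 192.168.49.0/24 is reserved, so it walks forward (49 → 58 → 67, i.e. steps of 9 in the third octet) until it finds a free /24. A hypothetical Python sketch of that selection logic (the function name, step size, and bound are assumptions for illustration, not minikube's actual code):

```python
import ipaddress

def pick_free_subnet(reserved, start="192.168.49.0", step=9, tries=20):
    """Walk 192.168.49.0/24, 192.168.58.0/24, ... and return the first /24
    whose base address is not in the reserved set (mimics the log's behaviour)."""
    base = ipaddress.ip_address(start)
    for i in range(tries):
        candidate = base + i * step * 256  # advance the third octet by `step`
        net = ipaddress.ip_network(f"{candidate}/24", strict=True)
        if str(net.network_address) not in reserved:
            return str(net)
    return None
```

With only 192.168.49.0 reserved this returns 192.168.58.0/24, matching the "reserving subnet 192.168.58.0" line above; with both 49.0 and 58.0 reserved it returns 192.168.67.0/24, matching the second attempt later in this log.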
	I1117 23:08:59.309463    9204 kic.go:106] calculated static IP "192.168.58.2" for the "auto-20211117230313-9504" container
	I1117 23:08:59.318218    9204 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:08:59.430514    9204 cli_runner.go:115] Run: docker volume create auto-20211117230313-9504 --label name.minikube.sigs.k8s.io=auto-20211117230313-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:09:00.200122    9204 oci.go:102] Successfully created a docker volume auto-20211117230313-9504
	I1117 23:09:00.203104    9204 cli_runner.go:115] Run: docker run --rm --name auto-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20211117230313-9504 --entrypoint /usr/bin/test -v auto-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:09:02.919612    9204 cli_runner.go:168] Completed: docker run --rm --name auto-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20211117230313-9504 --entrypoint /usr/bin/test -v auto-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (2.7164873s)
	I1117 23:09:02.919753    9204 oci.go:106] Successfully prepared a docker volume auto-20211117230313-9504
	I1117 23:09:02.919753    9204 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:09:02.919753    9204 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:09:02.924609    9204 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:09:02.925606    9204 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:09:03.064352    9204 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:09:03.064454    9204 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
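The exit-125 failure above is the preload-extraction step: minikube mounts the lz4-compressed image tarball read-only into a throwaway container and untars it into the cluster's volume, and here Docker Desktop's file-sharing prompt crashed before the mount could be approved. A hypothetical Python sketch that just reassembles the command line the log shows failing (the helper name is an assumption; this is not minikube's actual code):

```python
def preload_extract_cmd(tarball, volume, image):
    """Rebuild the docker invocation seen in the log: mount the lz4 preload
    tarball read-only at /preloaded.tar and untar it into /extractDir,
    which is backed by the cluster's named volume."""
    return [
        "docker", "run", "--rm",
        "--entrypoint", "/usr/bin/tar",
        "-v", f"{tarball}:/preloaded.tar:ro",
        "-v", f"{volume}:/extractDir",
        image,
        "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
    ]
```

On Windows the host-path `-v` mount is what triggers Docker Desktop's directory-sharing prompt, which is why the failure surfaces as a toast-notification exception rather than a tar error.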
	I1117 23:09:03.314835    9204 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2021-11-17 23:09:03.024831335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:09:03.315055    9204 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:09:03.315152    9204 client.go:171] LocalClient.Create took 4.6849518s
	I1117 23:09:05.326987    9204 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:09:05.329936    9204 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504
	W1117 23:09:05.432718    9204 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504 returned with exit code 1
	I1117 23:09:05.432718    9204 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
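The `docker container inspect -f` invocations above use a Go template to pull the host port mapped to the container's port 22/tcp out of the inspect document. A hypothetical Python equivalent operating on the parsed inspect JSON (function name and error handling are illustrative assumptions, not minikube's code):

```python
def host_port(inspect_json, container_port="22/tcp"):
    """Python equivalent of the Go template in the log:
    {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
    Returns the first host port bound to the given container port."""
    bindings = inspect_json["NetworkSettings"]["Ports"].get(container_port)
    if not bindings:
        raise KeyError(f"no host binding for {container_port}")
    return bindings[0]["HostPort"]
```

Here the lookup never gets that far: the container itself does not exist, so `docker container inspect` exits 1 with "No such container" and minikube falls into its retry loop.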
	I1117 23:09:05.714938    9204 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504
	W1117 23:09:05.804753    9204 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504 returned with exit code 1
	I1117 23:09:05.804753    9204 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:06.352465    9204 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504
	W1117 23:09:06.441153    9204 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504 returned with exit code 1
	I1117 23:09:06.441153    9204 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:07.101546    9204 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504
	W1117 23:09:07.193548    9204 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504 returned with exit code 1
	W1117 23:09:07.193774    9204 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	
	W1117 23:09:07.193846    9204 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:07.193911    9204 start.go:129] duration metric: createHost completed in 8.5676763s
	I1117 23:09:07.193911    9204 start.go:80] releasing machines lock for "auto-20211117230313-9504", held for 8.5676763s
	W1117 23:09:07.194176    9204 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:09:07.201609    9204 cli_runner.go:115] Run: docker container inspect auto-20211117230313-9504 --format={{.State.Status}}
	W1117 23:09:07.296613    9204 cli_runner.go:162] docker container inspect auto-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:07.296774    9204 delete.go:82] Unable to get host status for auto-20211117230313-9504, assuming it has already been deleted: state: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	W1117 23:09:07.296837    9204 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:09:07.296837    9204 start.go:547] Will try again in 5 seconds ...
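The `retry.go:31` lines above show the pattern minikube uses around flaky driver calls: each failed attempt logs the error and sleeps a growing, jittered delay (276ms, 540ms, 655ms, ...) before trying again, up to an attempt limit. A minimal Python sketch of that backoff-with-jitter pattern, assuming illustrative parameter names (`base`, `cap`); it is not minikube's actual implementation:

```python
import random
import time

def retry(fn, attempts=4, base=0.25, cap=5.0):
    """Call fn until it succeeds, sleeping an exponentially growing,
    jittered delay between failures; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as err:
            if attempt == attempts - 1:
                raise
            delay = min(cap, base * 2 ** attempt) * random.uniform(0.5, 1.5)
            print(f"will retry after {delay:.3f}s: {err}")
            time.sleep(delay)
```

The jitter keeps concurrent callers from retrying in lockstep; the cap bounds the worst-case wait, analogous to the flat 5-second pause before the full `createHost` restart below.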
	I1117 23:09:12.297666    9204 start.go:313] acquiring machines lock for auto-20211117230313-9504: {Name:mkc04c626b1d0512d45236488263924597be9dfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:09:12.297666    9204 start.go:317] acquired machines lock for "auto-20211117230313-9504" in 0s
	I1117 23:09:12.297666    9204 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:09:12.297666    9204 fix.go:55] fixHost starting: 
	I1117 23:09:12.307360    9204 cli_runner.go:115] Run: docker container inspect auto-20211117230313-9504 --format={{.State.Status}}
	W1117 23:09:12.396416    9204 cli_runner.go:162] docker container inspect auto-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:12.396416    9204 fix.go:108] recreateIfNeeded on auto-20211117230313-9504: state= err=unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:12.396416    9204 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:09:12.399116    9204 out.go:176] * docker "auto-20211117230313-9504" container is missing, will recreate.
	I1117 23:09:12.399188    9204 delete.go:124] DEMOLISHING auto-20211117230313-9504 ...
	I1117 23:09:12.408485    9204 cli_runner.go:115] Run: docker container inspect auto-20211117230313-9504 --format={{.State.Status}}
	W1117 23:09:12.498767    9204 cli_runner.go:162] docker container inspect auto-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:09:12.498767    9204 stop.go:75] unable to get state: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:12.498767    9204 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:12.514957    9204 cli_runner.go:115] Run: docker container inspect auto-20211117230313-9504 --format={{.State.Status}}
	W1117 23:09:12.603555    9204 cli_runner.go:162] docker container inspect auto-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:12.603843    9204 delete.go:82] Unable to get host status for auto-20211117230313-9504, assuming it has already been deleted: state: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:12.609987    9204 cli_runner.go:115] Run: docker container inspect -f {{.Id}} auto-20211117230313-9504
	W1117 23:09:12.699715    9204 cli_runner.go:162] docker container inspect -f {{.Id}} auto-20211117230313-9504 returned with exit code 1
	I1117 23:09:12.700007    9204 kic.go:360] could not find the container auto-20211117230313-9504 to remove it. will try anyways
	I1117 23:09:12.704305    9204 cli_runner.go:115] Run: docker container inspect auto-20211117230313-9504 --format={{.State.Status}}
	W1117 23:09:12.813958    9204 cli_runner.go:162] docker container inspect auto-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:09:12.813958    9204 oci.go:83] error getting container status, will try to delete anyways: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:12.816526    9204 cli_runner.go:115] Run: docker exec --privileged -t auto-20211117230313-9504 /bin/bash -c "sudo init 0"
	W1117 23:09:12.911772    9204 cli_runner.go:162] docker exec --privileged -t auto-20211117230313-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:09:12.911886    9204 oci.go:658] error shutdown auto-20211117230313-9504: docker exec --privileged -t auto-20211117230313-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:13.917102    9204 cli_runner.go:115] Run: docker container inspect auto-20211117230313-9504 --format={{.State.Status}}
	W1117 23:09:14.018527    9204 cli_runner.go:162] docker container inspect auto-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:14.018527    9204 oci.go:670] temporary error verifying shutdown: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:14.018780    9204 oci.go:672] temporary error: container auto-20211117230313-9504 status is  but expect it to be exited
	I1117 23:09:14.018843    9204 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:14.487677    9204 cli_runner.go:115] Run: docker container inspect auto-20211117230313-9504 --format={{.State.Status}}
	W1117 23:09:14.591974    9204 cli_runner.go:162] docker container inspect auto-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:14.592134    9204 oci.go:670] temporary error verifying shutdown: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:14.592179    9204 oci.go:672] temporary error: container auto-20211117230313-9504 status is  but expect it to be exited
	I1117 23:09:14.592179    9204 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:15.489940    9204 cli_runner.go:115] Run: docker container inspect auto-20211117230313-9504 --format={{.State.Status}}
	W1117 23:09:15.580264    9204 cli_runner.go:162] docker container inspect auto-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:15.580467    9204 oci.go:670] temporary error verifying shutdown: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:15.580467    9204 oci.go:672] temporary error: container auto-20211117230313-9504 status is  but expect it to be exited
	I1117 23:09:15.580467    9204 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:16.222072    9204 cli_runner.go:115] Run: docker container inspect auto-20211117230313-9504 --format={{.State.Status}}
	W1117 23:09:16.322359    9204 cli_runner.go:162] docker container inspect auto-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:16.322631    9204 oci.go:670] temporary error verifying shutdown: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:16.322696    9204 oci.go:672] temporary error: container auto-20211117230313-9504 status is  but expect it to be exited
	I1117 23:09:16.322696    9204 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:17.436158    9204 cli_runner.go:115] Run: docker container inspect auto-20211117230313-9504 --format={{.State.Status}}
	W1117 23:09:17.531340    9204 cli_runner.go:162] docker container inspect auto-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:17.531419    9204 oci.go:670] temporary error verifying shutdown: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:17.531548    9204 oci.go:672] temporary error: container auto-20211117230313-9504 status is  but expect it to be exited
	I1117 23:09:17.531548    9204 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:19.047907    9204 cli_runner.go:115] Run: docker container inspect auto-20211117230313-9504 --format={{.State.Status}}
	W1117 23:09:19.142921    9204 cli_runner.go:162] docker container inspect auto-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:19.143220    9204 oci.go:670] temporary error verifying shutdown: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:19.143220    9204 oci.go:672] temporary error: container auto-20211117230313-9504 status is  but expect it to be exited
	I1117 23:09:19.143339    9204 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:22.190407    9204 cli_runner.go:115] Run: docker container inspect auto-20211117230313-9504 --format={{.State.Status}}
	W1117 23:09:22.920244    9204 cli_runner.go:162] docker container inspect auto-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:22.920244    9204 oci.go:670] temporary error verifying shutdown: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:22.920244    9204 oci.go:672] temporary error: container auto-20211117230313-9504 status is  but expect it to be exited
	I1117 23:09:22.920597    9204 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:28.707599    9204 cli_runner.go:115] Run: docker container inspect auto-20211117230313-9504 --format={{.State.Status}}
	W1117 23:09:28.808120    9204 cli_runner.go:162] docker container inspect auto-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:28.808353    9204 oci.go:670] temporary error verifying shutdown: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:28.808353    9204 oci.go:672] temporary error: container auto-20211117230313-9504 status is  but expect it to be exited
	I1117 23:09:28.808353    9204 oci.go:87] couldn't shut down auto-20211117230313-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "auto-20211117230313-9504": docker container inspect auto-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	 
	I1117 23:09:28.814605    9204 cli_runner.go:115] Run: docker rm -f -v auto-20211117230313-9504
	W1117 23:09:28.902687    9204 cli_runner.go:162] docker rm -f -v auto-20211117230313-9504 returned with exit code 1
	W1117 23:09:28.903995    9204 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:09:28.904065    9204 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:09:29.905550    9204 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:09:29.910163    9204 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:09:29.910490    9204 start.go:160] libmachine.API.Create for "auto-20211117230313-9504" (driver="docker")
	I1117 23:09:29.910490    9204 client.go:168] LocalClient.Create starting
	I1117 23:09:29.911012    9204 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:09:29.911012    9204 main.go:130] libmachine: Decoding PEM data...
	I1117 23:09:29.911012    9204 main.go:130] libmachine: Parsing certificate...
	I1117 23:09:29.911012    9204 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:09:29.911012    9204 main.go:130] libmachine: Decoding PEM data...
	I1117 23:09:29.911566    9204 main.go:130] libmachine: Parsing certificate...
	I1117 23:09:29.916462    9204 cli_runner.go:115] Run: docker network inspect auto-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:09:30.010755    9204 cli_runner.go:162] docker network inspect auto-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:09:30.015126    9204 network_create.go:254] running [docker network inspect auto-20211117230313-9504] to gather additional debugging logs...
	I1117 23:09:30.015259    9204 cli_runner.go:115] Run: docker network inspect auto-20211117230313-9504
	W1117 23:09:30.124925    9204 cli_runner.go:162] docker network inspect auto-20211117230313-9504 returned with exit code 1
	I1117 23:09:30.124974    9204 network_create.go:257] error running [docker network inspect auto-20211117230313-9504]: docker network inspect auto-20211117230313-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20211117230313-9504
	I1117 23:09:30.125053    9204 network_create.go:259] output of [docker network inspect auto-20211117230313-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20211117230313-9504
	
	** /stderr **
	I1117 23:09:30.129017    9204 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:09:30.237228    9204 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000660430] amended:true}} dirty:map[192.168.49.0:0xc000660430 192.168.58.0:0xc000006930] misses:0}
	I1117 23:09:30.237228    9204 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:09:30.249225    9204 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000660430] amended:true}} dirty:map[192.168.49.0:0xc000660430 192.168.58.0:0xc000006930] misses:1}
	I1117 23:09:30.249225    9204 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:09:30.261222    9204 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000660430] amended:true}} dirty:map[192.168.49.0:0xc000660430 192.168.58.0:0xc000006930 192.168.67.0:0xc000006b80] misses:1}
	I1117 23:09:30.261222    9204 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:09:30.261222    9204 network_create.go:106] attempt to create docker network auto-20211117230313-9504 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 23:09:30.264222    9204 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20211117230313-9504
	I1117 23:09:30.537191    9204 network_create.go:90] docker network auto-20211117230313-9504 192.168.67.0/24 created
	I1117 23:09:30.537334    9204 kic.go:106] calculated static IP "192.168.67.2" for the "auto-20211117230313-9504" container
	I1117 23:09:30.543577    9204 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:09:30.645967    9204 cli_runner.go:115] Run: docker volume create auto-20211117230313-9504 --label name.minikube.sigs.k8s.io=auto-20211117230313-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:09:30.736712    9204 oci.go:102] Successfully created a docker volume auto-20211117230313-9504
	I1117 23:09:30.741739    9204 cli_runner.go:115] Run: docker run --rm --name auto-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20211117230313-9504 --entrypoint /usr/bin/test -v auto-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:09:31.663918    9204 oci.go:106] Successfully prepared a docker volume auto-20211117230313-9504
	I1117 23:09:31.663918    9204 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:09:31.664117    9204 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:09:31.668556    9204 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:09:31.669851    9204 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:09:31.799237    9204 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:09:31.799340    9204 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:09:32.041043    9204 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:47 OomKillDisable:true NGoroutines:56 SystemTime:2021-11-17 23:09:31.768824156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:09:32.041347    9204 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:09:32.041347    9204 client.go:171] LocalClient.Create took 2.1308404s
	I1117 23:09:34.050544    9204 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:09:34.054535    9204 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504
	W1117 23:09:34.164252    9204 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504 returned with exit code 1
	I1117 23:09:34.164399    9204 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:34.347884    9204 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504
	W1117 23:09:34.442975    9204 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504 returned with exit code 1
	I1117 23:09:34.443174    9204 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:34.778553    9204 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504
	W1117 23:09:34.875506    9204 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504 returned with exit code 1
	I1117 23:09:34.875729    9204 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:35.340094    9204 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504
	W1117 23:09:35.446069    9204 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504 returned with exit code 1
	W1117 23:09:35.446069    9204 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	
	W1117 23:09:35.446069    9204 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:35.446069    9204 start.go:129] duration metric: createHost completed in 5.540477s
	I1117 23:09:35.453070    9204 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:09:35.456064    9204 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504
	W1117 23:09:35.562285    9204 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504 returned with exit code 1
	I1117 23:09:35.562285    9204 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:35.764306    9204 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504
	W1117 23:09:35.858828    9204 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504 returned with exit code 1
	I1117 23:09:35.859248    9204 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:36.163813    9204 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504
	W1117 23:09:36.262065    9204 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504 returned with exit code 1
	I1117 23:09:36.262065    9204 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:36.930828    9204 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504
	W1117 23:09:37.032830    9204 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504 returned with exit code 1
	W1117 23:09:37.032830    9204 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	
	W1117 23:09:37.032830    9204 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117230313-9504
	I1117 23:09:37.032830    9204 fix.go:57] fixHost completed within 24.7349779s
	I1117 23:09:37.032830    9204 start.go:80] releasing machines lock for "auto-20211117230313-9504", held for 24.7349779s
	W1117 23:09:37.033827    9204 out.go:241] * Failed to start docker container. Running "minikube delete -p auto-20211117230313-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p auto-20211117230313-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:09:37.037832    9204 out.go:176] 
	W1117 23:09:37.037832    9204 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:09:37.037832    9204 out.go:241] * 
	* 
	W1117 23:09:37.039832    9204 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:09:37.042853    9204 out.go:176] 

** /stderr **

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (41.41s)

TestNetworkPlugins/group/false/Start (42.38s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20211117230315-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p false-20211117230315-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: exit status 80 (42.294221s)

-- stdout --
	* [false-20211117230315-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node false-20211117230315-9504 in cluster false-20211117230315-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "false-20211117230315-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:09:15.212049    6844 out.go:297] Setting OutFile to fd 1684 ...
	I1117 23:09:15.277376    6844 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:09:15.277376    6844 out.go:310] Setting ErrFile to fd 1488...
	I1117 23:09:15.277376    6844 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:09:15.289941    6844 out.go:304] Setting JSON to false
	I1117 23:09:15.292253    6844 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79871,"bootTime":1637110684,"procs":132,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:09:15.292253    6844 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:09:15.299150    6844 out.go:176] * [false-20211117230315-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:09:15.299150    6844 notify.go:174] Checking for updates...
	I1117 23:09:15.302416    6844 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:09:15.305811    6844 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:09:15.308042    6844 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:09:15.311037    6844 config.go:176] Loaded profile config "auto-20211117230313-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:09:15.311454    6844 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:09:15.311803    6844 config.go:176] Loaded profile config "pause-20211117230855-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:09:15.312119    6844 config.go:176] Loaded profile config "stopped-upgrade-20211117230646-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 23:09:15.312341    6844 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:09:16.966567    6844 docker.go:132] docker version: linux-19.03.12
	I1117 23:09:16.971104    6844 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:09:17.331061    6844 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2021-11-17 23:09:17.050571558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:09:17.341792    6844 out.go:176] * Using the docker driver based on user configuration
	I1117 23:09:17.341792    6844 start.go:280] selected driver: docker
	I1117 23:09:17.341792    6844 start.go:775] validating driver "docker" against <nil>
	I1117 23:09:17.341792    6844 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:09:17.405280    6844 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:09:17.780063    6844 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2021-11-17 23:09:17.492207596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:09:17.780063    6844 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:09:17.780063    6844 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:09:17.780063    6844 cni.go:93] Creating CNI manager for "false"
	I1117 23:09:17.780063    6844 start_flags.go:282] config:
	{Name:false-20211117230315-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:false-20211117230315-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Networ
kPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:09:17.783976    6844 out.go:176] * Starting control plane node false-20211117230315-9504 in cluster false-20211117230315-9504
	I1117 23:09:17.784127    6844 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:09:17.786884    6844 out.go:176] * Pulling base image ...
	I1117 23:09:17.786996    6844 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:09:17.787101    6844 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:09:17.787101    6844 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:09:17.787221    6844 cache.go:57] Caching tarball of preloaded images
	I1117 23:09:17.787425    6844 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:09:17.787425    6844 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:09:17.787425    6844 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-20211117230315-9504\config.json ...
	I1117 23:09:17.787425    6844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-20211117230315-9504\config.json: {Name:mk1874f75fb877084028a0d3c6e4379485dbed9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:09:17.903442    6844 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:09:17.903442    6844 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:09:17.903587    6844 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:09:17.903587    6844 start.go:313] acquiring machines lock for false-20211117230315-9504: {Name:mk0e006a7dc8226f15d4d911c4adf0b0dee16787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:09:17.904021    6844 start.go:317] acquired machines lock for "false-20211117230315-9504" in 108.3µs
	I1117 23:09:17.904247    6844 start.go:89] Provisioning new machine with config: &{Name:false-20211117230315-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:false-20211117230315-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:09:17.904466    6844 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:09:17.907541    6844 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:09:17.908212    6844 start.go:160] libmachine.API.Create for "false-20211117230315-9504" (driver="docker")
	I1117 23:09:17.908212    6844 client.go:168] LocalClient.Create starting
	I1117 23:09:17.909083    6844 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:09:17.909391    6844 main.go:130] libmachine: Decoding PEM data...
	I1117 23:09:17.909391    6844 main.go:130] libmachine: Parsing certificate...
	I1117 23:09:17.909644    6844 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:09:17.909924    6844 main.go:130] libmachine: Decoding PEM data...
	I1117 23:09:17.910051    6844 main.go:130] libmachine: Parsing certificate...
	I1117 23:09:17.916756    6844 cli_runner.go:115] Run: docker network inspect false-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:09:18.024969    6844 cli_runner.go:162] docker network inspect false-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:09:18.029714    6844 network_create.go:254] running [docker network inspect false-20211117230315-9504] to gather additional debugging logs...
	I1117 23:09:18.029770    6844 cli_runner.go:115] Run: docker network inspect false-20211117230315-9504
	W1117 23:09:18.127397    6844 cli_runner.go:162] docker network inspect false-20211117230315-9504 returned with exit code 1
	I1117 23:09:18.127397    6844 network_create.go:257] error running [docker network inspect false-20211117230315-9504]: docker network inspect false-20211117230315-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20211117230315-9504
	I1117 23:09:18.127397    6844 network_create.go:259] output of [docker network inspect false-20211117230315-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20211117230315-9504
	
	** /stderr **
	I1117 23:09:18.133827    6844 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:09:18.245084    6844 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0004ac408] misses:0}
	I1117 23:09:18.246170    6844 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:09:18.246235    6844 network_create.go:106] attempt to create docker network false-20211117230315-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:09:18.250759    6844 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20211117230315-9504
	I1117 23:09:18.462058    6844 network_create.go:90] docker network false-20211117230315-9504 192.168.49.0/24 created
	I1117 23:09:18.462058    6844 kic.go:106] calculated static IP "192.168.49.2" for the "false-20211117230315-9504" container
	I1117 23:09:18.475338    6844 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:09:18.576141    6844 cli_runner.go:115] Run: docker volume create false-20211117230315-9504 --label name.minikube.sigs.k8s.io=false-20211117230315-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:09:18.684199    6844 oci.go:102] Successfully created a docker volume false-20211117230315-9504
	I1117 23:09:18.692066    6844 cli_runner.go:115] Run: docker run --rm --name false-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20211117230315-9504 --entrypoint /usr/bin/test -v false-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:09:23.853057    6844 cli_runner.go:168] Completed: docker run --rm --name false-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20211117230315-9504 --entrypoint /usr/bin/test -v false-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (5.1608056s)
	I1117 23:09:23.853356    6844 oci.go:106] Successfully prepared a docker volume false-20211117230315-9504
	I1117 23:09:23.853486    6844 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:09:23.853542    6844 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:09:23.858657    6844 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:09:23.860080    6844 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:09:23.972852    6844 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:09:23.972919    6844 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:09:24.240910    6844 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:47 OomKillDisable:true NGoroutines:56 SystemTime:2021-11-17 23:09:23.955028416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:09:24.241246    6844 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:09:24.241354    6844 client.go:171] LocalClient.Create took 6.3329696s
	I1117 23:09:26.248941    6844 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:09:26.252644    6844 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504
	W1117 23:09:26.343909    6844 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504 returned with exit code 1
	I1117 23:09:26.344272    6844 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:26.625472    6844 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504
	W1117 23:09:26.721417    6844 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504 returned with exit code 1
	I1117 23:09:26.721417    6844 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:27.267806    6844 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504
	W1117 23:09:27.368816    6844 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504 returned with exit code 1
	I1117 23:09:27.369180    6844 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:28.029272    6844 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504
	W1117 23:09:28.123656    6844 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504 returned with exit code 1
	W1117 23:09:28.124073    6844 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	
	W1117 23:09:28.124131    6844 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:28.124131    6844 start.go:129] duration metric: createHost completed in 10.2195885s
	I1117 23:09:28.124197    6844 start.go:80] releasing machines lock for "false-20211117230315-9504", held for 10.2200993s
	W1117 23:09:28.124351    6844 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:09:28.133037    6844 cli_runner.go:115] Run: docker container inspect false-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:28.228774    6844 cli_runner.go:162] docker container inspect false-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:28.228774    6844 delete.go:82] Unable to get host status for false-20211117230315-9504, assuming it has already been deleted: state: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	W1117 23:09:28.228774    6844 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:09:28.228774    6844 start.go:547] Will try again in 5 seconds ...
	I1117 23:09:33.229468    6844 start.go:313] acquiring machines lock for false-20211117230315-9504: {Name:mk0e006a7dc8226f15d4d911c4adf0b0dee16787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:09:33.229877    6844 start.go:317] acquired machines lock for "false-20211117230315-9504" in 191.2µs
	I1117 23:09:33.230066    6844 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:09:33.230066    6844 fix.go:55] fixHost starting: 
	I1117 23:09:33.239161    6844 cli_runner.go:115] Run: docker container inspect false-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:33.337502    6844 cli_runner.go:162] docker container inspect false-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:33.337502    6844 fix.go:108] recreateIfNeeded on false-20211117230315-9504: state= err=unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:33.337891    6844 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:09:33.341622    6844 out.go:176] * docker "false-20211117230315-9504" container is missing, will recreate.
	I1117 23:09:33.341693    6844 delete.go:124] DEMOLISHING false-20211117230315-9504 ...
	I1117 23:09:33.349782    6844 cli_runner.go:115] Run: docker container inspect false-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:33.440325    6844 cli_runner.go:162] docker container inspect false-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:09:33.440325    6844 stop.go:75] unable to get state: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:33.440538    6844 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:33.448666    6844 cli_runner.go:115] Run: docker container inspect false-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:33.546020    6844 cli_runner.go:162] docker container inspect false-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:33.546275    6844 delete.go:82] Unable to get host status for false-20211117230315-9504, assuming it has already been deleted: state: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:33.550659    6844 cli_runner.go:115] Run: docker container inspect -f {{.Id}} false-20211117230315-9504
	W1117 23:09:33.649351    6844 cli_runner.go:162] docker container inspect -f {{.Id}} false-20211117230315-9504 returned with exit code 1
	I1117 23:09:33.649512    6844 kic.go:360] could not find the container false-20211117230315-9504 to remove it. will try anyways
	I1117 23:09:33.654234    6844 cli_runner.go:115] Run: docker container inspect false-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:33.780475    6844 cli_runner.go:162] docker container inspect false-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:09:33.780475    6844 oci.go:83] error getting container status, will try to delete anyways: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:33.783472    6844 cli_runner.go:115] Run: docker exec --privileged -t false-20211117230315-9504 /bin/bash -c "sudo init 0"
	W1117 23:09:33.886346    6844 cli_runner.go:162] docker exec --privileged -t false-20211117230315-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:09:33.886346    6844 oci.go:658] error shutdown false-20211117230315-9504: docker exec --privileged -t false-20211117230315-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:34.891352    6844 cli_runner.go:115] Run: docker container inspect false-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:34.988089    6844 cli_runner.go:162] docker container inspect false-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:34.988089    6844 oci.go:670] temporary error verifying shutdown: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:34.988089    6844 oci.go:672] temporary error: container false-20211117230315-9504 status is  but expect it to be exited
	I1117 23:09:34.988089    6844 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:35.455070    6844 cli_runner.go:115] Run: docker container inspect false-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:35.550815    6844 cli_runner.go:162] docker container inspect false-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:35.550899    6844 oci.go:670] temporary error verifying shutdown: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:35.550899    6844 oci.go:672] temporary error: container false-20211117230315-9504 status is  but expect it to be exited
	I1117 23:09:35.551001    6844 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:36.446460    6844 cli_runner.go:115] Run: docker container inspect false-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:36.539717    6844 cli_runner.go:162] docker container inspect false-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:36.540033    6844 oci.go:670] temporary error verifying shutdown: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:36.540033    6844 oci.go:672] temporary error: container false-20211117230315-9504 status is  but expect it to be exited
	I1117 23:09:36.540103    6844 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:37.181169    6844 cli_runner.go:115] Run: docker container inspect false-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:37.291954    6844 cli_runner.go:162] docker container inspect false-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:37.292206    6844 oci.go:670] temporary error verifying shutdown: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:37.292206    6844 oci.go:672] temporary error: container false-20211117230315-9504 status is  but expect it to be exited
	I1117 23:09:37.292433    6844 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:38.405240    6844 cli_runner.go:115] Run: docker container inspect false-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:38.498913    6844 cli_runner.go:162] docker container inspect false-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:38.499173    6844 oci.go:670] temporary error verifying shutdown: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:38.499223    6844 oci.go:672] temporary error: container false-20211117230315-9504 status is  but expect it to be exited
	I1117 23:09:38.499223    6844 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:40.015263    6844 cli_runner.go:115] Run: docker container inspect false-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:40.114857    6844 cli_runner.go:162] docker container inspect false-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:40.115008    6844 oci.go:670] temporary error verifying shutdown: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:40.115008    6844 oci.go:672] temporary error: container false-20211117230315-9504 status is  but expect it to be exited
	I1117 23:09:40.115008    6844 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:43.162141    6844 cli_runner.go:115] Run: docker container inspect false-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:43.260646    6844 cli_runner.go:162] docker container inspect false-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:43.260646    6844 oci.go:670] temporary error verifying shutdown: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:43.260646    6844 oci.go:672] temporary error: container false-20211117230315-9504 status is  but expect it to be exited
	I1117 23:09:43.260646    6844 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:49.047562    6844 cli_runner.go:115] Run: docker container inspect false-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:49.140286    6844 cli_runner.go:162] docker container inspect false-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:49.140286    6844 oci.go:670] temporary error verifying shutdown: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:49.140286    6844 oci.go:672] temporary error: container false-20211117230315-9504 status is  but expect it to be exited
	I1117 23:09:49.140544    6844 oci.go:87] couldn't shut down false-20211117230315-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "false-20211117230315-9504": docker container inspect false-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	 
	I1117 23:09:49.144561    6844 cli_runner.go:115] Run: docker rm -f -v false-20211117230315-9504
	W1117 23:09:49.230267    6844 cli_runner.go:162] docker rm -f -v false-20211117230315-9504 returned with exit code 1
	W1117 23:09:49.231534    6844 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:09:49.231534    6844 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:09:50.232703    6844 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:09:50.235693    6844 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:09:50.236086    6844 start.go:160] libmachine.API.Create for "false-20211117230315-9504" (driver="docker")
	I1117 23:09:50.236171    6844 client.go:168] LocalClient.Create starting
	I1117 23:09:50.236579    6844 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:09:50.236834    6844 main.go:130] libmachine: Decoding PEM data...
	I1117 23:09:50.236861    6844 main.go:130] libmachine: Parsing certificate...
	I1117 23:09:50.236946    6844 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:09:50.237202    6844 main.go:130] libmachine: Decoding PEM data...
	I1117 23:09:50.237310    6844 main.go:130] libmachine: Parsing certificate...
	I1117 23:09:50.241559    6844 cli_runner.go:115] Run: docker network inspect false-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:09:50.340184    6844 cli_runner.go:162] docker network inspect false-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:09:50.344845    6844 network_create.go:254] running [docker network inspect false-20211117230315-9504] to gather additional debugging logs...
	I1117 23:09:50.344976    6844 cli_runner.go:115] Run: docker network inspect false-20211117230315-9504
	W1117 23:09:50.443881    6844 cli_runner.go:162] docker network inspect false-20211117230315-9504 returned with exit code 1
	I1117 23:09:50.443881    6844 network_create.go:257] error running [docker network inspect false-20211117230315-9504]: docker network inspect false-20211117230315-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20211117230315-9504
	I1117 23:09:50.443881    6844 network_create.go:259] output of [docker network inspect false-20211117230315-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20211117230315-9504
	
	** /stderr **
	I1117 23:09:50.448047    6844 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:09:50.550000    6844 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ac408] amended:false}} dirty:map[] misses:0}
	I1117 23:09:50.550000    6844 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:09:50.562378    6844 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ac408] amended:true}} dirty:map[192.168.49.0:0xc0004ac408 192.168.58.0:0xc000106350] misses:0}
	I1117 23:09:50.562493    6844 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:09:50.562493    6844 network_create.go:106] attempt to create docker network false-20211117230315-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:09:50.566646    6844 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20211117230315-9504
	W1117 23:09:50.666855    6844 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20211117230315-9504 returned with exit code 1
	W1117 23:09:50.666855    6844 network_create.go:98] failed to create docker network false-20211117230315-9504 192.168.58.0/24, will retry: subnet is taken
	I1117 23:09:50.678846    6844 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ac408] amended:true}} dirty:map[192.168.49.0:0xc0004ac408 192.168.58.0:0xc000106350] misses:1}
	I1117 23:09:50.679781    6844 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:09:50.691541    6844 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ac408] amended:true}} dirty:map[192.168.49.0:0xc0004ac408 192.168.58.0:0xc000106350 192.168.67.0:0xc0001063d8] misses:1}
	I1117 23:09:50.691613    6844 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:09:50.691613    6844 network_create.go:106] attempt to create docker network false-20211117230315-9504 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 23:09:50.695291    6844 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20211117230315-9504
	I1117 23:09:50.915019    6844 network_create.go:90] docker network false-20211117230315-9504 192.168.67.0/24 created
	I1117 23:09:50.915204    6844 kic.go:106] calculated static IP "192.168.67.2" for the "false-20211117230315-9504" container
	I1117 23:09:50.924275    6844 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:09:51.023949    6844 cli_runner.go:115] Run: docker volume create false-20211117230315-9504 --label name.minikube.sigs.k8s.io=false-20211117230315-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:09:51.115590    6844 oci.go:102] Successfully created a docker volume false-20211117230315-9504
	I1117 23:09:51.120374    6844 cli_runner.go:115] Run: docker run --rm --name false-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20211117230315-9504 --entrypoint /usr/bin/test -v false-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:09:51.986424    6844 oci.go:106] Successfully prepared a docker volume false-20211117230315-9504
	I1117 23:09:51.988662    6844 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:09:51.988727    6844 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:09:51.990752    6844 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:09:51.996407    6844 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:09:52.113014    6844 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:09:52.113138    6844 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:09:52.348395    6844 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:09:52.080263818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:09:52.348888    6844 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:09:52.348941    6844 client.go:171] LocalClient.Create took 2.112754s
	I1117 23:09:54.357148    6844 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:09:54.359832    6844 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504
	W1117 23:09:54.449612    6844 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504 returned with exit code 1
	I1117 23:09:54.449612    6844 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:54.634135    6844 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504
	W1117 23:09:54.733310    6844 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504 returned with exit code 1
	I1117 23:09:54.733310    6844 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:55.068195    6844 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504
	W1117 23:09:55.154252    6844 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504 returned with exit code 1
	I1117 23:09:55.154348    6844 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:55.620180    6844 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504
	W1117 23:09:55.718158    6844 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504 returned with exit code 1
	W1117 23:09:55.718158    6844 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	
	W1117 23:09:55.718158    6844 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:55.718158    6844 start.go:129] duration metric: createHost completed in 5.485356s
	I1117 23:09:55.726412    6844 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:09:55.730341    6844 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504
	W1117 23:09:55.833489    6844 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504 returned with exit code 1
	I1117 23:09:55.833639    6844 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:56.033353    6844 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504
	W1117 23:09:56.145065    6844 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504 returned with exit code 1
	I1117 23:09:56.145441    6844 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:56.448261    6844 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504
	W1117 23:09:56.535696    6844 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504 returned with exit code 1
	I1117 23:09:56.535938    6844 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:57.203512    6844 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504
	W1117 23:09:57.299667    6844 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504 returned with exit code 1
	W1117 23:09:57.299824    6844 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	
	W1117 23:09:57.299891    6844 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117230315-9504
	I1117 23:09:57.299919    6844 fix.go:57] fixHost completed within 24.0696448s
	I1117 23:09:57.299919    6844 start.go:80] releasing machines lock for "false-20211117230315-9504", held for 24.0698618s
	W1117 23:09:57.300062    6844 out.go:241] * Failed to start docker container. Running "minikube delete -p false-20211117230315-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p false-20211117230315-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:09:57.304250    6844 out.go:176] 
	W1117 23:09:57.304960    6844 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:09:57.304960    6844 out.go:241] * 
	* 
	W1117 23:09:57.305833    6844 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:09:57.308678    6844 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (42.38s)

TestPause/serial/SecondStartNoReconfiguration (60.97s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20211117230855-9504 --alsologtostderr -v=1 --driver=docker

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p pause-20211117230855-9504 --alsologtostderr -v=1 --driver=docker: exit status 80 (58.9755958s)

-- stdout --
	* [pause-20211117230855-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	* Starting control plane node pause-20211117230855-9504 in cluster pause-20211117230855-9504
	* Pulling base image ...
	* docker "pause-20211117230855-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "pause-20211117230855-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:09:39.178418    7172 out.go:297] Setting OutFile to fd 1404 ...
	I1117 23:09:39.242456    7172 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:09:39.242456    7172 out.go:310] Setting ErrFile to fd 1684...
	I1117 23:09:39.243435    7172 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:09:39.253444    7172 out.go:304] Setting JSON to false
	I1117 23:09:39.255466    7172 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79895,"bootTime":1637110684,"procs":132,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:09:39.256433    7172 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:09:39.259453    7172 out.go:176] * [pause-20211117230855-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:09:39.259453    7172 notify.go:174] Checking for updates...
	I1117 23:09:39.262432    7172 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:09:39.267438    7172 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:09:39.269453    7172 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:09:39.269453    7172 config.go:176] Loaded profile config "pause-20211117230855-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:09:39.270425    7172 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:09:40.955894    7172 docker.go:132] docker version: linux-19.03.12
	I1117 23:09:40.960498    7172 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:09:41.317829    7172 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2021-11-17 23:09:41.046815446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:09:41.321424    7172 out.go:176] * Using the docker driver based on existing profile
	I1117 23:09:41.321424    7172 start.go:280] selected driver: docker
	I1117 23:09:41.321424    7172 start.go:775] validating driver "docker" against &{Name:pause-20211117230855-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:pause-20211117230855-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:09:41.321424    7172 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:09:41.335366    7172 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:09:41.715040    7172 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2021-11-17 23:09:41.424566376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:09:41.757747    7172 cni.go:93] Creating CNI manager for ""
	I1117 23:09:41.757747    7172 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:09:41.757747    7172 start_flags.go:282] config:
	{Name:pause-20211117230855-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:pause-20211117230855-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:09:41.762735    7172 out.go:176] * Starting control plane node pause-20211117230855-9504 in cluster pause-20211117230855-9504
	I1117 23:09:41.762735    7172 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:09:41.764738    7172 out.go:176] * Pulling base image ...
	I1117 23:09:41.764738    7172 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:09:41.764738    7172 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:09:41.764738    7172 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:09:41.764738    7172 cache.go:57] Caching tarball of preloaded images
	I1117 23:09:41.764738    7172 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:09:41.765773    7172 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:09:41.765773    7172 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-20211117230855-9504\config.json ...
	I1117 23:09:41.867414    7172 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:09:41.867414    7172 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:09:41.867414    7172 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:09:41.867414    7172 start.go:313] acquiring machines lock for pause-20211117230855-9504: {Name:mk4766a808f0af8ff59bac8ab80591f3e9d63384 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:09:41.867414    7172 start.go:317] acquired machines lock for "pause-20211117230855-9504" in 0s
	I1117 23:09:41.867414    7172 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:09:41.867414    7172 fix.go:55] fixHost starting: 
	I1117 23:09:41.874417    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:09:41.972162    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:41.972353    7172 fix.go:108] recreateIfNeeded on pause-20211117230855-9504: state= err=unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:41.972353    7172 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:09:41.974647    7172 out.go:176] * docker "pause-20211117230855-9504" container is missing, will recreate.
	I1117 23:09:41.974647    7172 delete.go:124] DEMOLISHING pause-20211117230855-9504 ...
	I1117 23:09:41.981647    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:09:42.080259    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:09:42.080259    7172 stop.go:75] unable to get state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:42.080259    7172 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:42.088883    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:09:42.184995    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:42.184995    7172 delete.go:82] Unable to get host status for pause-20211117230855-9504, assuming it has already been deleted: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:42.188081    7172 cli_runner.go:115] Run: docker container inspect -f {{.Id}} pause-20211117230855-9504
	W1117 23:09:42.281593    7172 cli_runner.go:162] docker container inspect -f {{.Id}} pause-20211117230855-9504 returned with exit code 1
	I1117 23:09:42.281593    7172 kic.go:360] could not find the container pause-20211117230855-9504 to remove it. will try anyways
	I1117 23:09:42.284204    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:09:42.387592    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:09:42.387592    7172 oci.go:83] error getting container status, will try to delete anyways: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:42.391609    7172 cli_runner.go:115] Run: docker exec --privileged -t pause-20211117230855-9504 /bin/bash -c "sudo init 0"
	W1117 23:09:42.510790    7172 cli_runner.go:162] docker exec --privileged -t pause-20211117230855-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:09:42.510790    7172 oci.go:658] error shutdown pause-20211117230855-9504: docker exec --privileged -t pause-20211117230855-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:43.517455    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:09:43.614650    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:43.614885    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:43.614885    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:09:43.614992    7172 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:44.172679    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:09:44.272700    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:44.272947    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:44.272947    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:09:44.272947    7172 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:45.359104    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:09:45.467801    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:45.467801    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:45.467801    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:09:45.467801    7172 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:46.784125    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:09:46.873520    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:46.873520    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:46.873520    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:09:46.873920    7172 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:48.461018    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:09:48.556966    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:48.557127    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:48.557127    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:09:48.557127    7172 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:50.903431    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:09:50.997210    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:50.997533    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:50.997533    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:09:50.997621    7172 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:55.509810    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:09:55.598295    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:55.598516    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:55.598628    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:09:55.598701    7172 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:58.826348    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:09:58.926612    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:58.926915    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:09:58.926947    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:09:58.926947    7172 oci.go:87] couldn't shut down pause-20211117230855-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	 
	I1117 23:09:58.931897    7172 cli_runner.go:115] Run: docker rm -f -v pause-20211117230855-9504
	W1117 23:09:59.020600    7172 cli_runner.go:162] docker rm -f -v pause-20211117230855-9504 returned with exit code 1
	W1117 23:09:59.021348    7172 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:09:59.021348    7172 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:10:00.022462    7172 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:10:00.026177    7172 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:10:00.026477    7172 start.go:160] libmachine.API.Create for "pause-20211117230855-9504" (driver="docker")
	I1117 23:10:00.026567    7172 client.go:168] LocalClient.Create starting
	I1117 23:10:00.026701    7172 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:10:00.026701    7172 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:00.027228    7172 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:00.027473    7172 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:10:00.027542    7172 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:00.027542    7172 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:00.033005    7172 cli_runner.go:115] Run: docker network inspect pause-20211117230855-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:10:00.131828    7172 cli_runner.go:162] docker network inspect pause-20211117230855-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:10:00.136116    7172 network_create.go:254] running [docker network inspect pause-20211117230855-9504] to gather additional debugging logs...
	I1117 23:10:00.136275    7172 cli_runner.go:115] Run: docker network inspect pause-20211117230855-9504
	W1117 23:10:00.226593    7172 cli_runner.go:162] docker network inspect pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:00.226774    7172 network_create.go:257] error running [docker network inspect pause-20211117230855-9504]: docker network inspect pause-20211117230855-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: pause-20211117230855-9504
	I1117 23:10:00.226774    7172 network_create.go:259] output of [docker network inspect pause-20211117230855-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: pause-20211117230855-9504
	
	** /stderr **
	I1117 23:10:00.231512    7172 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:10:00.350184    7172 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000a5e388] misses:0}
	I1117 23:10:00.350184    7172 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:00.350184    7172 network_create.go:106] attempt to create docker network pause-20211117230855-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:10:00.354508    7172 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117230855-9504
	W1117 23:10:00.450237    7172 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117230855-9504 returned with exit code 1
	W1117 23:10:00.450237    7172 network_create.go:98] failed to create docker network pause-20211117230855-9504 192.168.49.0/24, will retry: subnet is taken
	I1117 23:10:00.464060    7172 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e388] amended:false}} dirty:map[] misses:0}
	I1117 23:10:00.464060    7172 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:00.478063    7172 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e388] amended:true}} dirty:map[192.168.49.0:0xc000a5e388 192.168.58.0:0xc000492148] misses:0}
	I1117 23:10:00.478063    7172 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:00.478063    7172 network_create.go:106] attempt to create docker network pause-20211117230855-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:10:00.482080    7172 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117230855-9504
	W1117 23:10:00.575945    7172 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117230855-9504 returned with exit code 1
	W1117 23:10:00.575945    7172 network_create.go:98] failed to create docker network pause-20211117230855-9504 192.168.58.0/24, will retry: subnet is taken
	I1117 23:10:00.590846    7172 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e388] amended:true}} dirty:map[192.168.49.0:0xc000a5e388 192.168.58.0:0xc000492148] misses:1}
	I1117 23:10:00.590846    7172 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:00.604898    7172 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e388] amended:true}} dirty:map[192.168.49.0:0xc000a5e388 192.168.58.0:0xc000492148 192.168.67.0:0xc00014e368] misses:1}
	I1117 23:10:00.604898    7172 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:00.604898    7172 network_create.go:106] attempt to create docker network pause-20211117230855-9504 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 23:10:00.609739    7172 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117230855-9504
	W1117 23:10:00.702999    7172 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117230855-9504 returned with exit code 1
	W1117 23:10:00.703192    7172 network_create.go:98] failed to create docker network pause-20211117230855-9504 192.168.67.0/24, will retry: subnet is taken
	I1117 23:10:00.719201    7172 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e388] amended:true}} dirty:map[192.168.49.0:0xc000a5e388 192.168.58.0:0xc000492148 192.168.67.0:0xc00014e368] misses:2}
	I1117 23:10:00.719381    7172 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:00.735697    7172 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e388] amended:true}} dirty:map[192.168.49.0:0xc000a5e388 192.168.58.0:0xc000492148 192.168.67.0:0xc00014e368 192.168.76.0:0xc00014eaa0] misses:2}
	I1117 23:10:00.735697    7172 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:00.735697    7172 network_create.go:106] attempt to create docker network pause-20211117230855-9504 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1117 23:10:00.739632    7172 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117230855-9504
	I1117 23:10:00.947318    7172 network_create.go:90] docker network pause-20211117230855-9504 192.168.76.0/24 created
	I1117 23:10:00.947449    7172 kic.go:106] calculated static IP "192.168.76.2" for the "pause-20211117230855-9504" container
	I1117 23:10:00.955887    7172 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:10:01.056063    7172 cli_runner.go:115] Run: docker volume create pause-20211117230855-9504 --label name.minikube.sigs.k8s.io=pause-20211117230855-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:10:01.156700    7172 oci.go:102] Successfully created a docker volume pause-20211117230855-9504
	I1117 23:10:01.160733    7172 cli_runner.go:115] Run: docker run --rm --name pause-20211117230855-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-20211117230855-9504 --entrypoint /usr/bin/test -v pause-20211117230855-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:10:02.064030    7172 oci.go:106] Successfully prepared a docker volume pause-20211117230855-9504
	I1117 23:10:02.064092    7172 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:10:02.064092    7172 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:10:02.068686    7172 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:02.069627    7172 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v pause-20211117230855-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:10:02.181314    7172 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v pause-20211117230855-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:10:02.181397    7172 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v pause-20211117230855-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:10:02.441899    7172 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2021-11-17 23:10:02.162898347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:10:02.442220    7172 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:10:02.442319    7172 client.go:171] LocalClient.Create took 2.4156798s
	I1117 23:10:04.453974    7172 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:10:04.457781    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:04.552868    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:04.553222    7172 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:04.707725    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:04.797225    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:04.797563    7172 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:05.104561    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:05.190543    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:05.190808    7172 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:05.767134    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:05.861720    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	W1117 23:10:05.861720    7172 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	
	W1117 23:10:05.861720    7172 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:05.861720    7172 start.go:129] duration metric: createHost completed in 5.8391056s
	I1117 23:10:05.870139    7172 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:10:05.873354    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:05.967569    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:05.967708    7172 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:06.151217    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:06.246627    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:06.246874    7172 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:06.581453    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:06.673312    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:06.673565    7172 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:07.138731    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:07.230620    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	W1117 23:10:07.230765    7172 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	
	W1117 23:10:07.230765    7172 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:07.230765    7172 fix.go:57] fixHost completed within 25.3631609s
	I1117 23:10:07.230765    7172 start.go:80] releasing machines lock for "pause-20211117230855-9504", held for 25.3631609s
	W1117 23:10:07.230765    7172 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:10:07.230765    7172 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:10:07.231324    7172 start.go:547] Will try again in 5 seconds ...
	I1117 23:10:12.232005    7172 start.go:313] acquiring machines lock for pause-20211117230855-9504: {Name:mk4766a808f0af8ff59bac8ab80591f3e9d63384 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:10:12.232510    7172 start.go:317] acquired machines lock for "pause-20211117230855-9504" in 362.6µs
	I1117 23:10:12.232510    7172 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:10:12.232510    7172 fix.go:55] fixHost starting: 
	I1117 23:10:12.240321    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:12.333995    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:12.333995    7172 fix.go:108] recreateIfNeeded on pause-20211117230855-9504: state= err=unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:12.334177    7172 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:10:12.338898    7172 out.go:176] * docker "pause-20211117230855-9504" container is missing, will recreate.
	I1117 23:10:12.338898    7172 delete.go:124] DEMOLISHING pause-20211117230855-9504 ...
	I1117 23:10:12.345191    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:12.444470    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:10:12.444693    7172 stop.go:75] unable to get state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:12.444693    7172 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:12.453283    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:12.542534    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:12.542823    7172 delete.go:82] Unable to get host status for pause-20211117230855-9504, assuming it has already been deleted: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:12.546498    7172 cli_runner.go:115] Run: docker container inspect -f {{.Id}} pause-20211117230855-9504
	W1117 23:10:12.631776    7172 cli_runner.go:162] docker container inspect -f {{.Id}} pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:12.632003    7172 kic.go:360] could not find the container pause-20211117230855-9504 to remove it. will try anyways
	I1117 23:10:12.636432    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:12.721990    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:10:12.722157    7172 oci.go:83] error getting container status, will try to delete anyways: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:12.726534    7172 cli_runner.go:115] Run: docker exec --privileged -t pause-20211117230855-9504 /bin/bash -c "sudo init 0"
	W1117 23:10:12.825929    7172 cli_runner.go:162] docker exec --privileged -t pause-20211117230855-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:10:12.826182    7172 oci.go:658] error shutdown pause-20211117230855-9504: docker exec --privileged -t pause-20211117230855-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:13.831359    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:13.922724    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:13.923010    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:13.923010    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:10:13.923108    7172 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:14.319492    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:14.421689    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:14.421912    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:14.421912    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:10:14.421912    7172 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:15.022511    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:15.115536    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:15.115770    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:15.115811    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:10:15.115811    7172 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:16.447878    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:16.532791    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:16.532791    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:16.533231    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:10:16.533231    7172 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:17.752138    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:17.847305    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:17.847571    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:17.847571    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:10:17.847640    7172 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:19.633374    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:19.720162    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:19.720392    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:19.720392    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:10:19.720464    7172 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:22.994358    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:23.084805    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:23.084937    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:23.084937    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:10:23.085052    7172 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:29.187095    7172 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:29.285301    7172 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:29.285301    7172 oci.go:670] temporary error verifying shutdown: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:29.285301    7172 oci.go:672] temporary error: container pause-20211117230855-9504 status is  but expect it to be exited
	I1117 23:10:29.285301    7172 oci.go:87] couldn't shut down pause-20211117230855-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	 
	I1117 23:10:29.288313    7172 cli_runner.go:115] Run: docker rm -f -v pause-20211117230855-9504
	W1117 23:10:29.371566    7172 cli_runner.go:162] docker rm -f -v pause-20211117230855-9504 returned with exit code 1
	W1117 23:10:29.372475    7172 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:10:29.372475    7172 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:10:30.373141    7172 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:10:30.420852    7172 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:10:30.421243    7172 start.go:160] libmachine.API.Create for "pause-20211117230855-9504" (driver="docker")
	I1117 23:10:30.421342    7172 client.go:168] LocalClient.Create starting
	I1117 23:10:30.421464    7172 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:10:30.421464    7172 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:30.421464    7172 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:30.422071    7172 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:10:30.422275    7172 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:30.422275    7172 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:30.427401    7172 cli_runner.go:115] Run: docker network inspect pause-20211117230855-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:10:30.578449    7172 cli_runner.go:162] docker network inspect pause-20211117230855-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:10:30.582591    7172 network_create.go:254] running [docker network inspect pause-20211117230855-9504] to gather additional debugging logs...
	I1117 23:10:30.582655    7172 cli_runner.go:115] Run: docker network inspect pause-20211117230855-9504
	W1117 23:10:30.679536    7172 cli_runner.go:162] docker network inspect pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:30.679618    7172 network_create.go:257] error running [docker network inspect pause-20211117230855-9504]: docker network inspect pause-20211117230855-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: pause-20211117230855-9504
	I1117 23:10:30.679618    7172 network_create.go:259] output of [docker network inspect pause-20211117230855-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: pause-20211117230855-9504
	
	** /stderr **
	I1117 23:10:30.684306    7172 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:10:30.786806    7172 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e388] amended:true}} dirty:map[192.168.49.0:0xc000a5e388 192.168.58.0:0xc000492148 192.168.67.0:0xc00014e368 192.168.76.0:0xc00014eaa0] misses:2}
	I1117 23:10:30.786806    7172 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:30.799090    7172 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e388] amended:true}} dirty:map[192.168.49.0:0xc000a5e388 192.168.58.0:0xc000492148 192.168.67.0:0xc00014e368 192.168.76.0:0xc00014eaa0] misses:3}
	I1117 23:10:30.799090    7172 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:30.810087    7172 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e388 192.168.58.0:0xc000492148 192.168.67.0:0xc00014e368 192.168.76.0:0xc00014eaa0] amended:false}} dirty:map[] misses:0}
	I1117 23:10:30.810087    7172 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:30.822799    7172 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e388 192.168.58.0:0xc000492148 192.168.67.0:0xc00014e368 192.168.76.0:0xc00014eaa0] amended:false}} dirty:map[] misses:0}
	I1117 23:10:30.822799    7172 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:30.835147    7172 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e388 192.168.58.0:0xc000492148 192.168.67.0:0xc00014e368 192.168.76.0:0xc00014eaa0] amended:true}} dirty:map[192.168.49.0:0xc000a5e388 192.168.58.0:0xc000492148 192.168.67.0:0xc00014e368 192.168.76.0:0xc00014eaa0 192.168.85.0:0xc000492450] misses:0}
	I1117 23:10:30.835147    7172 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:30.835147    7172 network_create.go:106] attempt to create docker network pause-20211117230855-9504 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1117 23:10:30.839148    7172 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117230855-9504
	I1117 23:10:31.052827    7172 network_create.go:90] docker network pause-20211117230855-9504 192.168.85.0/24 created
	I1117 23:10:31.052827    7172 kic.go:106] calculated static IP "192.168.85.2" for the "pause-20211117230855-9504" container
	I1117 23:10:31.061837    7172 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:10:31.155200    7172 cli_runner.go:115] Run: docker volume create pause-20211117230855-9504 --label name.minikube.sigs.k8s.io=pause-20211117230855-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:10:31.256876    7172 oci.go:102] Successfully created a docker volume pause-20211117230855-9504
	I1117 23:10:31.261845    7172 cli_runner.go:115] Run: docker run --rm --name pause-20211117230855-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-20211117230855-9504 --entrypoint /usr/bin/test -v pause-20211117230855-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:10:32.192973    7172 oci.go:106] Successfully prepared a docker volume pause-20211117230855-9504
	I1117 23:10:32.193206    7172 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:10:32.193236    7172 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:10:32.198612    7172 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:32.198677    7172 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v pause-20211117230855-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:10:32.315804    7172 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v pause-20211117230855-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:10:32.316044    7172 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v pause-20211117230855-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:10:32.550157    7172 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:54 SystemTime:2021-11-17 23:10:32.282852094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:10:32.550745    7172 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:10:32.550958    7172 client.go:171] LocalClient.Create took 2.1296007s
	I1117 23:10:34.559482    7172 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:10:34.563356    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:34.663420    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:34.663420    7172 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:34.867547    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:34.950333    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:34.950333    7172 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:35.254809    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:35.346276    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:35.346665    7172 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:36.057741    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:36.155360    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	W1117 23:10:36.155862    7172 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	
	W1117 23:10:36.155862    7172 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:36.155862    7172 start.go:129] duration metric: createHost completed in 5.7826781s
	I1117 23:10:36.164197    7172 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:10:36.167583    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:36.265492    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:36.265670    7172 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:36.611533    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:36.696443    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:36.696889    7172 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:37.150995    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:37.257467    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	I1117 23:10:37.257467    7172 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:37.838565    7172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504
	W1117 23:10:37.932857    7172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504 returned with exit code 1
	W1117 23:10:37.933136    7172 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	
	W1117 23:10:37.933212    7172 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117230855-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117230855-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	I1117 23:10:37.933212    7172 fix.go:57] fixHost completed within 25.7005094s
	I1117 23:10:37.933267    7172 start.go:80] releasing machines lock for "pause-20211117230855-9504", held for 25.7005645s
	W1117 23:10:37.933784    7172 out.go:241] * Failed to start docker container. Running "minikube delete -p pause-20211117230855-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p pause-20211117230855-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:10:37.938153    7172 out.go:176] 
	W1117 23:10:37.938310    7172 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:10:37.938310    7172 out.go:241] * 
	* 
	W1117 23:10:37.939647    7172 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:10:37.943340    7172 out.go:176] 

** /stderr **
pause_test.go:92: failed to second start a running minikube with args: "out/minikube-windows-amd64.exe start -p pause-20211117230855-9504 --alsologtostderr -v=1 --driver=docker" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117230855-9504
helpers_test.go:235: (dbg) docker inspect pause-20211117230855-9504:

-- stdout --
	[
	    {
	        "Name": "pause-20211117230855-9504",
	        "Id": "1526ca367da2e2230c9e8171330d5ce95b1affd84562077fb1bfd799901f6da5",
	        "Created": "2021-11-17T23:10:30.916394294Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504: exit status 7 (1.7642391s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:10:39.941030    6304 status.go:247] status error: host: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117230855-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (60.97s)

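The failure above boils down to minikube repeatedly polling `docker container inspect --format '{{.State.Status}}'` against a container that no longer exists, which exits non-zero and surfaces as `unknown state`. A minimal sketch of that probe, runnable by hand (the `probe_state` helper name is ours, not minikube's):

```shell
#!/usr/bin/env sh
# probe_state NAME prints the container's state, or "unknown" when
# `docker container inspect` fails (container deleted, daemon down, ...),
# mirroring the "temporary error verifying shutdown" path in the log.
probe_state() {
  docker container inspect "$1" --format '{{.State.Status}}' 2>/dev/null \
    || echo unknown
}

# The container name from this test run; after `docker rm -f -v` it is gone,
# so this prints "unknown" rather than "exited".
probe_state pause-20211117230855-9504
```

This is why the log shows `status is  but expect it to be exited`: the inspect call yields no state at all, not a state of `exited`.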
TestNetworkPlugins/group/cilium/Start (38.52s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20211117230315-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20211117230315-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 80 (38.4219393s)

-- stdout --
	* [cilium-20211117230315-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node cilium-20211117230315-9504 in cluster cilium-20211117230315-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cilium-20211117230315-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
** stderr ** 
	I1117 23:09:42.860996    4376 out.go:297] Setting OutFile to fd 1700 ...
	I1117 23:09:42.927635    4376 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:09:42.927891    4376 out.go:310] Setting ErrFile to fd 1612...
	I1117 23:09:42.927891    4376 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:09:42.947884    4376 out.go:304] Setting JSON to false
	I1117 23:09:42.951663    4376 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79898,"bootTime":1637110684,"procs":133,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:09:42.951663    4376 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:09:42.957153    4376 out.go:176] * [cilium-20211117230315-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:09:42.957440    4376 notify.go:174] Checking for updates...
	I1117 23:09:42.960620    4376 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:09:42.962736    4376 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:09:42.964684    4376 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:09:42.966682    4376 config.go:176] Loaded profile config "false-20211117230315-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:09:42.966682    4376 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:09:42.967716    4376 config.go:176] Loaded profile config "pause-20211117230855-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:09:42.967716    4376 config.go:176] Loaded profile config "stopped-upgrade-20211117230646-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 23:09:42.967716    4376 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:09:44.608577    4376 docker.go:132] docker version: linux-19.03.12
	I1117 23:09:44.613184    4376 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:09:44.965440    4376 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:48 OomKillDisable:true NGoroutines:54 SystemTime:2021-11-17 23:09:44.693530949 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:09:44.970695    4376 out.go:176] * Using the docker driver based on user configuration
	I1117 23:09:44.970695    4376 start.go:280] selected driver: docker
	I1117 23:09:44.970695    4376 start.go:775] validating driver "docker" against <nil>
	I1117 23:09:44.970695    4376 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:09:45.050771    4376 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:09:45.412057    4376 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:46 OomKillDisable:true NGoroutines:54 SystemTime:2021-11-17 23:09:45.133212832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:09:45.412220    4376 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:09:45.412747    4376 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:09:45.412882    4376 cni.go:93] Creating CNI manager for "cilium"
	I1117 23:09:45.412911    4376 start_flags.go:277] Found "Cilium" CNI - setting NetworkPlugin=cni
	I1117 23:09:45.412911    4376 start_flags.go:282] config:
	{Name:cilium-20211117230315-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:cilium-20211117230315-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:09:45.416610    4376 out.go:176] * Starting control plane node cilium-20211117230315-9504 in cluster cilium-20211117230315-9504
	I1117 23:09:45.416610    4376 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:09:45.421077    4376 out.go:176] * Pulling base image ...
	I1117 23:09:45.421077    4376 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:09:45.421077    4376 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:09:45.421077    4376 cache.go:57] Caching tarball of preloaded images
	I1117 23:09:45.421077    4376 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:09:45.421796    4376 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:09:45.422120    4376 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:09:45.422509    4376 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-20211117230315-9504\config.json ...
	I1117 23:09:45.422509    4376 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-20211117230315-9504\config.json: {Name:mk764701dedb3a48f9fda3f69230305d72b4a265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:09:45.533271    4376 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:09:45.533348    4376 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:09:45.533348    4376 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:09:45.533348    4376 start.go:313] acquiring machines lock for cilium-20211117230315-9504: {Name:mk1871c1f177e1b102e81cc9113665d7af335989 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:09:45.533348    4376 start.go:317] acquired machines lock for "cilium-20211117230315-9504" in 0s
	I1117 23:09:45.533348    4376 start.go:89] Provisioning new machine with config: &{Name:cilium-20211117230315-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:cilium-20211117230315-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:09:45.533873    4376 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:09:45.537620    4376 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:09:45.538283    4376 start.go:160] libmachine.API.Create for "cilium-20211117230315-9504" (driver="docker")
	I1117 23:09:45.538283    4376 client.go:168] LocalClient.Create starting
	I1117 23:09:45.538810    4376 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:09:45.539037    4376 main.go:130] libmachine: Decoding PEM data...
	I1117 23:09:45.539037    4376 main.go:130] libmachine: Parsing certificate...
	I1117 23:09:45.539037    4376 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:09:45.539037    4376 main.go:130] libmachine: Decoding PEM data...
	I1117 23:09:45.539037    4376 main.go:130] libmachine: Parsing certificate...
	I1117 23:09:45.544362    4376 cli_runner.go:115] Run: docker network inspect cilium-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:09:45.640519    4376 cli_runner.go:162] docker network inspect cilium-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:09:45.643514    4376 network_create.go:254] running [docker network inspect cilium-20211117230315-9504] to gather additional debugging logs...
	I1117 23:09:45.644517    4376 cli_runner.go:115] Run: docker network inspect cilium-20211117230315-9504
	W1117 23:09:45.733854    4376 cli_runner.go:162] docker network inspect cilium-20211117230315-9504 returned with exit code 1
	I1117 23:09:45.733854    4376 network_create.go:257] error running [docker network inspect cilium-20211117230315-9504]: docker network inspect cilium-20211117230315-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20211117230315-9504
	I1117 23:09:45.734105    4376 network_create.go:259] output of [docker network inspect cilium-20211117230315-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20211117230315-9504
	
	** /stderr **
	I1117 23:09:45.737821    4376 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:09:45.849418    4376 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007c68e0] misses:0}
	I1117 23:09:45.849418    4376 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:09:45.849418    4376 network_create.go:106] attempt to create docker network cilium-20211117230315-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:09:45.853443    4376 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20211117230315-9504
	I1117 23:09:46.068302    4376 network_create.go:90] docker network cilium-20211117230315-9504 192.168.49.0/24 created
	I1117 23:09:46.068302    4376 kic.go:106] calculated static IP "192.168.49.2" for the "cilium-20211117230315-9504" container
	I1117 23:09:46.075560    4376 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:09:46.171314    4376 cli_runner.go:115] Run: docker volume create cilium-20211117230315-9504 --label name.minikube.sigs.k8s.io=cilium-20211117230315-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:09:46.287134    4376 oci.go:102] Successfully created a docker volume cilium-20211117230315-9504
	I1117 23:09:46.291715    4376 cli_runner.go:115] Run: docker run --rm --name cilium-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20211117230315-9504 --entrypoint /usr/bin/test -v cilium-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:09:47.779686    4376 cli_runner.go:168] Completed: docker run --rm --name cilium-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20211117230315-9504 --entrypoint /usr/bin/test -v cilium-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.4878983s)
	I1117 23:09:47.779686    4376 oci.go:106] Successfully prepared a docker volume cilium-20211117230315-9504
	I1117 23:09:47.779686    4376 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:09:47.779686    4376 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:09:47.784353    4376 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:09:47.784353    4376 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:09:47.899873    4376 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:09:47.899873    4376 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:09:48.149284    4376 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:09:47.864957833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:09:48.149284    4376 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:09:48.149284    4376 client.go:171] LocalClient.Create took 2.6109816s
	I1117 23:09:50.157060    4376 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:09:50.161058    4376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504
	W1117 23:09:50.258275    4376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504 returned with exit code 1
	I1117 23:09:50.258275    4376 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:09:50.539920    4376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504
	W1117 23:09:50.630984    4376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504 returned with exit code 1
	I1117 23:09:50.631132    4376 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:09:51.176258    4376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504
	W1117 23:09:51.274410    4376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504 returned with exit code 1
	I1117 23:09:51.274778    4376 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:09:51.936210    4376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504
	W1117 23:09:52.033552    4376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504 returned with exit code 1
	W1117 23:09:52.033734    4376 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	
	W1117 23:09:52.033734    4376 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:09:52.033734    4376 start.go:129] duration metric: createHost completed in 6.499813s
	I1117 23:09:52.033734    4376 start.go:80] releasing machines lock for "cilium-20211117230315-9504", held for 6.5003377s
	W1117 23:09:52.033734    4376 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:09:52.042186    4376 cli_runner.go:115] Run: docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:52.136262    4376 cli_runner.go:162] docker container inspect cilium-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:52.136518    4376 delete.go:82] Unable to get host status for cilium-20211117230315-9504, assuming it has already been deleted: state: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	W1117 23:09:52.136889    4376 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:09:52.136934    4376 start.go:547] Will try again in 5 seconds ...
	I1117 23:09:57.137464    4376 start.go:313] acquiring machines lock for cilium-20211117230315-9504: {Name:mk1871c1f177e1b102e81cc9113665d7af335989 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:09:57.137464    4376 start.go:317] acquired machines lock for "cilium-20211117230315-9504" in 0s
	I1117 23:09:57.137464    4376 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:09:57.137464    4376 fix.go:55] fixHost starting: 
	I1117 23:09:57.145455    4376 cli_runner.go:115] Run: docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:57.239730    4376 cli_runner.go:162] docker container inspect cilium-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:57.239730    4376 fix.go:108] recreateIfNeeded on cilium-20211117230315-9504: state= err=unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:09:57.239730    4376 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:09:57.244593    4376 out.go:176] * docker "cilium-20211117230315-9504" container is missing, will recreate.
	I1117 23:09:57.244593    4376 delete.go:124] DEMOLISHING cilium-20211117230315-9504 ...
	I1117 23:09:57.253721    4376 cli_runner.go:115] Run: docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:57.351039    4376 cli_runner.go:162] docker container inspect cilium-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:09:57.351039    4376 stop.go:75] unable to get state: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:09:57.351039    4376 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:09:57.364352    4376 cli_runner.go:115] Run: docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:57.462467    4376 cli_runner.go:162] docker container inspect cilium-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:57.462467    4376 delete.go:82] Unable to get host status for cilium-20211117230315-9504, assuming it has already been deleted: state: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:09:57.467475    4376 cli_runner.go:115] Run: docker container inspect -f {{.Id}} cilium-20211117230315-9504
	W1117 23:09:57.561661    4376 cli_runner.go:162] docker container inspect -f {{.Id}} cilium-20211117230315-9504 returned with exit code 1
	I1117 23:09:57.561731    4376 kic.go:360] could not find the container cilium-20211117230315-9504 to remove it. will try anyways
	I1117 23:09:57.565097    4376 cli_runner.go:115] Run: docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:57.656090    4376 cli_runner.go:162] docker container inspect cilium-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:09:57.656090    4376 oci.go:83] error getting container status, will try to delete anyways: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:09:57.659102    4376 cli_runner.go:115] Run: docker exec --privileged -t cilium-20211117230315-9504 /bin/bash -c "sudo init 0"
	W1117 23:09:57.746474    4376 cli_runner.go:162] docker exec --privileged -t cilium-20211117230315-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:09:57.746474    4376 oci.go:658] error shutdown cilium-20211117230315-9504: docker exec --privileged -t cilium-20211117230315-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:09:58.754441    4376 cli_runner.go:115] Run: docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:58.854007    4376 cli_runner.go:162] docker container inspect cilium-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:58.854320    4376 oci.go:670] temporary error verifying shutdown: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:09:58.854320    4376 oci.go:672] temporary error: container cilium-20211117230315-9504 status is  but expect it to be exited
	I1117 23:09:58.854384    4376 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:09:59.321958    4376 cli_runner.go:115] Run: docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:59.417159    4376 cli_runner.go:162] docker container inspect cilium-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:59.417213    4376 oci.go:670] temporary error verifying shutdown: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:09:59.417213    4376 oci.go:672] temporary error: container cilium-20211117230315-9504 status is  but expect it to be exited
	I1117 23:09:59.417213    4376 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:00.313184    4376 cli_runner.go:115] Run: docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:00.409997    4376 cli_runner.go:162] docker container inspect cilium-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:00.410174    4376 oci.go:670] temporary error verifying shutdown: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:00.410174    4376 oci.go:672] temporary error: container cilium-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:00.410174    4376 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:01.052527    4376 cli_runner.go:115] Run: docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:01.158336    4376 cli_runner.go:162] docker container inspect cilium-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:01.158336    4376 oci.go:670] temporary error verifying shutdown: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:01.158336    4376 oci.go:672] temporary error: container cilium-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:01.158336    4376 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:02.273758    4376 cli_runner.go:115] Run: docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:02.368924    4376 cli_runner.go:162] docker container inspect cilium-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:02.368924    4376 oci.go:670] temporary error verifying shutdown: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:02.368924    4376 oci.go:672] temporary error: container cilium-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:02.368924    4376 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:03.884909    4376 cli_runner.go:115] Run: docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:03.975676    4376 cli_runner.go:162] docker container inspect cilium-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:03.975676    4376 oci.go:670] temporary error verifying shutdown: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:03.975676    4376 oci.go:672] temporary error: container cilium-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:03.975676    4376 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:07.022637    4376 cli_runner.go:115] Run: docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:07.126430    4376 cli_runner.go:162] docker container inspect cilium-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:07.126430    4376 oci.go:670] temporary error verifying shutdown: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:07.126430    4376 oci.go:672] temporary error: container cilium-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:07.126430    4376 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:12.914504    4376 cli_runner.go:115] Run: docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:13.027590    4376 cli_runner.go:162] docker container inspect cilium-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:13.027728    4376 oci.go:670] temporary error verifying shutdown: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:13.027728    4376 oci.go:672] temporary error: container cilium-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:13.027728    4376 oci.go:87] couldn't shut down cilium-20211117230315-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "cilium-20211117230315-9504": docker container inspect cilium-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	 
	I1117 23:10:13.031489    4376 cli_runner.go:115] Run: docker rm -f -v cilium-20211117230315-9504
	W1117 23:10:13.130428    4376 cli_runner.go:162] docker rm -f -v cilium-20211117230315-9504 returned with exit code 1
	W1117 23:10:13.131540    4376 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:10:13.131540    4376 fix.go:120] Sleeping 1 second for extra luck!
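The shutdown-verification attempts above back off roughly exponentially (462ms, 890ms, 636ms, 1.1s, 1.5s, 3.0s, 5.8s) before `oci.go` gives up. A minimal sketch of that jittered-backoff pattern, using a hypothetical `check` callable; this approximates the behavior logged by `retry.go`, not minikube's actual implementation:

```python
import random
import time

def retry_with_backoff(check, max_wait=60.0, base=0.5):
    """Retry `check` until it succeeds or the time budget runs out.

    Delays roughly double each attempt, with jitter, mirroring the
    462ms -> 890ms -> ... -> 5.78s progression seen in the log above.
    """
    deadline = time.monotonic() + max_wait
    delay = base
    attempt = 0
    while True:
        attempt += 1
        try:
            return check()
        except Exception as err:
            if time.monotonic() + delay > deadline:
                raise TimeoutError(f"gave up after {attempt} attempts") from err
            # jittered exponential backoff before the next probe
            time.sleep(delay * random.uniform(0.8, 1.2))
            delay *= 2

# Example: a check that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("container not exited yet")
    return "exited"

print(retry_with_backoff(flaky, base=0.01))
```

In the log the container never materializes, so every probe fails and the loop ends with the "couldn't shut down ... (might be okay)" warning rather than a success.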
	I1117 23:10:14.133428    4376 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:10:14.145684    4376 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:10:14.145684    4376 start.go:160] libmachine.API.Create for "cilium-20211117230315-9504" (driver="docker")
	I1117 23:10:14.145684    4376 client.go:168] LocalClient.Create starting
	I1117 23:10:14.146527    4376 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:10:14.146806    4376 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:14.146806    4376 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:14.146976    4376 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:10:14.147183    4376 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:14.147183    4376 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:14.151922    4376 cli_runner.go:115] Run: docker network inspect cilium-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:10:14.245000    4376 cli_runner.go:162] docker network inspect cilium-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:10:14.249381    4376 network_create.go:254] running [docker network inspect cilium-20211117230315-9504] to gather additional debugging logs...
	I1117 23:10:14.249381    4376 cli_runner.go:115] Run: docker network inspect cilium-20211117230315-9504
	W1117 23:10:14.341126    4376 cli_runner.go:162] docker network inspect cilium-20211117230315-9504 returned with exit code 1
	I1117 23:10:14.341301    4376 network_create.go:257] error running [docker network inspect cilium-20211117230315-9504]: docker network inspect cilium-20211117230315-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20211117230315-9504
	I1117 23:10:14.341301    4376 network_create.go:259] output of [docker network inspect cilium-20211117230315-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20211117230315-9504
	
	** /stderr **
	I1117 23:10:14.345180    4376 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:10:14.457694    4376 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007c68e0] amended:false}} dirty:map[] misses:0}
	I1117 23:10:14.457694    4376 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:14.469953    4376 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007c68e0] amended:true}} dirty:map[192.168.49.0:0xc0007c68e0 192.168.58.0:0xc0007c6410] misses:0}
	I1117 23:10:14.470739    4376 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:14.470982    4376 network_create.go:106] attempt to create docker network cilium-20211117230315-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:10:14.474975    4376 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20211117230315-9504
	I1117 23:10:14.681007    4376 network_create.go:90] docker network cilium-20211117230315-9504 192.168.58.0/24 created
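The subnet-selection lines above skip 192.168.49.0/24 (reserved by a prior cluster) and settle on 192.168.58.0/24. A small sketch of that walk over candidate /24 subnets; the step size and candidate list here are illustrative, not minikube's actual `network.go` rules:

```python
import ipaddress

def pick_free_subnet(reserved, start="192.168.49.0/24", step=9, tries=20):
    """Walk candidate private /24 subnets, skipping reserved ones.

    Loosely mirrors the log above, where 192.168.49.0/24 is reserved and
    192.168.58.0/24 (third octet advanced by 9) is used instead.
    """
    net = ipaddress.ip_network(start)
    for _ in range(tries):
        if str(net) not in reserved:
            return str(net)
        # advance the third octet by `step` to form the next candidate /24
        nxt = ipaddress.ip_address(int(net.network_address) + step * 256)
        net = ipaddress.ip_network(f"{nxt}/24")
    raise RuntimeError("no free subnet found")

print(pick_free_subnet({"192.168.49.0/24"}))  # -> 192.168.58.0/24
```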
	I1117 23:10:14.681007    4376 kic.go:106] calculated static IP "192.168.58.2" for the "cilium-20211117230315-9504" container
	I1117 23:10:14.699107    4376 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:10:14.794127    4376 cli_runner.go:115] Run: docker volume create cilium-20211117230315-9504 --label name.minikube.sigs.k8s.io=cilium-20211117230315-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:10:14.886916    4376 oci.go:102] Successfully created a docker volume cilium-20211117230315-9504
	I1117 23:10:14.891830    4376 cli_runner.go:115] Run: docker run --rm --name cilium-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20211117230315-9504 --entrypoint /usr/bin/test -v cilium-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:10:15.780180    4376 oci.go:106] Successfully prepared a docker volume cilium-20211117230315-9504
	I1117 23:10:15.780494    4376 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:10:15.780589    4376 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:10:15.784778    4376 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:15.784912    4376 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:10:15.906435    4376 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:10:15.906435    4376 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
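The exit-code-125 failure above is the root cause of this run: Docker Desktop's file-sharing prompt threw "The notification platform is unavailable" when the preload-tarball bind mount was requested, so the daemon returned a 500 whose JSON body is buried in the stderr line. A small illustrative helper (not part of minikube) for pulling the human-readable `Message` field out of such a daemon error line:

```python
import json

def daemon_error_message(stderr_line: str) -> str:
    """Extract the JSON `Message` field from a Docker daemon 500 error line.

    Docker Desktop wraps unhandled exceptions in a JSON body like the one
    above; the prose before the first '{' and the trailing '.' are stripped.
    """
    start = stderr_line.index("{")
    end = stderr_line.rindex("}") + 1
    body = json.loads(stderr_line[start:end])
    return body["Message"].strip()

# Shortened sample in the shape of the stderr line above.
line = ('docker: Error response from daemon: status code not OK but 500: '
        '{"Message":"Unhandled exception: The notification platform is '
        'unavailable.\\r\\n","StackTrace":"..."}.')
print(daemon_error_message(line))
```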
	I1117 23:10:16.161029    4376 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:10:15.877368346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:10:16.161692    4376 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:10:16.161855    4376 client.go:171] LocalClient.Create took 2.016156s
	I1117 23:10:18.169631    4376 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:10:18.173644    4376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504
	W1117 23:10:18.263540    4376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504 returned with exit code 1
	I1117 23:10:18.263861    4376 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:18.447430    4376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504
	W1117 23:10:18.539654    4376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504 returned with exit code 1
	I1117 23:10:18.539654    4376 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:18.875198    4376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504
	W1117 23:10:18.966052    4376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504 returned with exit code 1
	I1117 23:10:18.966468    4376 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:19.431449    4376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504
	W1117 23:10:19.519573    4376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504 returned with exit code 1
	W1117 23:10:19.519573    4376 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	
	W1117 23:10:19.519573    4376 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:19.519573    4376 start.go:129] duration metric: createHost completed in 5.3860389s
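The repeated failing command above uses the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` to find the host port mapped to the container's SSH port 22. The equivalent lookup over the `docker container inspect` JSON can be sketched in Python; the sample payload below is assumed, since the real container was never created:

```python
import json

def ssh_host_port(inspect_json: str) -> str:
    """Mirror the Go template
    {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}:
    take the first binding for container port 22/tcp, return its host port.
    """
    data = json.loads(inspect_json)
    bindings = data["NetworkSettings"]["Ports"]["22/tcp"]
    return bindings[0]["HostPort"]

# Assumed sample of the relevant slice of `docker container inspect` output.
sample = json.dumps({
    "NetworkSettings": {
        "Ports": {"22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "49154"}]}
    }
})
print(ssh_host_port(sample))  # -> 49154
```

When the container does not exist, `docker container inspect` exits 1 with empty stdout, which is why every retry here resolves to "get port 22 ... exit status 1".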
	I1117 23:10:19.527046    4376 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:10:19.530401    4376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504
	W1117 23:10:19.617769    4376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504 returned with exit code 1
	I1117 23:10:19.617975    4376 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:19.819359    4376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504
	W1117 23:10:19.907860    4376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504 returned with exit code 1
	I1117 23:10:19.907860    4376 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:20.210810    4376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504
	W1117 23:10:20.300682    4376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504 returned with exit code 1
	I1117 23:10:20.300970    4376 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:20.969435    4376 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504
	W1117 23:10:21.061820    4376 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504 returned with exit code 1
	W1117 23:10:21.061820    4376 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	
	W1117 23:10:21.061820    4376 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117230315-9504
	I1117 23:10:21.061820    4376 fix.go:57] fixHost completed within 23.9241763s
	I1117 23:10:21.061820    4376 start.go:80] releasing machines lock for "cilium-20211117230315-9504", held for 23.9241763s
	W1117 23:10:21.064305    4376 out.go:241] * Failed to start docker container. Running "minikube delete -p cilium-20211117230315-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p cilium-20211117230315-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:10:21.069154    4376 out.go:176] 
	W1117 23:10:21.069400    4376 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:10:21.069400    4376 out.go:241] * 
	* 
	W1117 23:10:21.071401    4376 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:10:21.075127    4376 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (38.52s)

TestNetworkPlugins/group/calico/Start (38.15s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20211117230315-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20211117230315-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 80 (38.0651406s)

-- stdout --
	* [calico-20211117230315-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node calico-20211117230315-9504 in cluster calico-20211117230315-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "calico-20211117230315-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:09:45.814226   10140 out.go:297] Setting OutFile to fd 1788 ...
	I1117 23:09:45.886411   10140 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:09:45.886411   10140 out.go:310] Setting ErrFile to fd 1332...
	I1117 23:09:45.886411   10140 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:09:45.897618   10140 out.go:304] Setting JSON to false
	I1117 23:09:45.900145   10140 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79901,"bootTime":1637110684,"procs":133,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:09:45.900145   10140 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:09:45.906567   10140 out.go:176] * [calico-20211117230315-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:09:45.906853   10140 notify.go:174] Checking for updates...
	I1117 23:09:45.909585   10140 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:09:45.911966   10140 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:09:45.914207   10140 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:09:45.915204   10140 config.go:176] Loaded profile config "cilium-20211117230315-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:09:45.915204   10140 config.go:176] Loaded profile config "false-20211117230315-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:09:45.915204   10140 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:09:45.916209   10140 config.go:176] Loaded profile config "pause-20211117230855-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:09:45.916209   10140 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:09:47.520090   10140 docker.go:132] docker version: linux-19.03.12
	I1117 23:09:47.524089   10140 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:09:47.940795   10140 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:42 OomKillDisable:true NGoroutines:56 SystemTime:2021-11-17 23:09:47.636598728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:09:47.944763   10140 out.go:176] * Using the docker driver based on user configuration
	I1117 23:09:47.944763   10140 start.go:280] selected driver: docker
	I1117 23:09:47.945300   10140 start.go:775] validating driver "docker" against <nil>
	I1117 23:09:47.945408   10140 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:09:48.009420   10140 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:09:48.358241   10140 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:09:48.089745312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:09:48.358241   10140 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:09:48.358241   10140 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:09:48.358241   10140 cni.go:93] Creating CNI manager for "calico"
	I1117 23:09:48.358241   10140 start_flags.go:277] Found "Calico" CNI - setting NetworkPlugin=cni
	I1117 23:09:48.358241   10140 start_flags.go:282] config:
	{Name:calico-20211117230315-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:calico-20211117230315-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:09:48.363448   10140 out.go:176] * Starting control plane node calico-20211117230315-9504 in cluster calico-20211117230315-9504
	I1117 23:09:48.363555   10140 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:09:48.366671   10140 out.go:176] * Pulling base image ...
	I1117 23:09:48.366704   10140 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:09:48.366704   10140 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:09:48.366704   10140 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:09:48.366704   10140 cache.go:57] Caching tarball of preloaded images
	I1117 23:09:48.367322   10140 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:09:48.367322   10140 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:09:48.367322   10140 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-20211117230315-9504\config.json ...
	I1117 23:09:48.367925   10140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-20211117230315-9504\config.json: {Name:mkbf605a6364627f70a15a01327855afe01c1ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:09:48.468576   10140 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:09:48.468576   10140 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:09:48.468576   10140 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:09:48.468576   10140 start.go:313] acquiring machines lock for calico-20211117230315-9504: {Name:mkcfad000a1e361222ba78663b9022d70b4f11bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:09:48.468576   10140 start.go:317] acquired machines lock for "calico-20211117230315-9504" in 0s
	I1117 23:09:48.468576   10140 start.go:89] Provisioning new machine with config: &{Name:calico-20211117230315-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:calico-20211117230315-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:09:48.469252   10140 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:09:48.473062   10140 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:09:48.473062   10140 start.go:160] libmachine.API.Create for "calico-20211117230315-9504" (driver="docker")
	I1117 23:09:48.473597   10140 client.go:168] LocalClient.Create starting
	I1117 23:09:48.473781   10140 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:09:48.473781   10140 main.go:130] libmachine: Decoding PEM data...
	I1117 23:09:48.473781   10140 main.go:130] libmachine: Parsing certificate...
	I1117 23:09:48.474675   10140 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:09:48.474889   10140 main.go:130] libmachine: Decoding PEM data...
	I1117 23:09:48.474955   10140 main.go:130] libmachine: Parsing certificate...
	I1117 23:09:48.480356   10140 cli_runner.go:115] Run: docker network inspect calico-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:09:48.582967   10140 cli_runner.go:162] docker network inspect calico-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:09:48.589940   10140 network_create.go:254] running [docker network inspect calico-20211117230315-9504] to gather additional debugging logs...
	I1117 23:09:48.589940   10140 cli_runner.go:115] Run: docker network inspect calico-20211117230315-9504
	W1117 23:09:48.680351   10140 cli_runner.go:162] docker network inspect calico-20211117230315-9504 returned with exit code 1
	I1117 23:09:48.680434   10140 network_create.go:257] error running [docker network inspect calico-20211117230315-9504]: docker network inspect calico-20211117230315-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20211117230315-9504
	I1117 23:09:48.680434   10140 network_create.go:259] output of [docker network inspect calico-20211117230315-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20211117230315-9504
	
	** /stderr **
	I1117 23:09:48.683081   10140 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:09:48.790838   10140 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007121f0] misses:0}
	I1117 23:09:48.790838   10140 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:09:48.790838   10140 network_create.go:106] attempt to create docker network calico-20211117230315-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:09:48.793930   10140 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20211117230315-9504
	W1117 23:09:48.884573   10140 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20211117230315-9504 returned with exit code 1
	W1117 23:09:48.884707   10140 network_create.go:98] failed to create docker network calico-20211117230315-9504 192.168.49.0/24, will retry: subnet is taken
	I1117 23:09:48.898301   10140 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007121f0] amended:false}} dirty:map[] misses:0}
	I1117 23:09:48.899298   10140 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:09:48.913375   10140 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007121f0] amended:true}} dirty:map[192.168.49.0:0xc0007121f0 192.168.58.0:0xc00060cd98] misses:0}
	I1117 23:09:48.913375   10140 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:09:48.913899   10140 network_create.go:106] attempt to create docker network calico-20211117230315-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:09:48.917514   10140 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20211117230315-9504
	I1117 23:09:49.117993   10140 network_create.go:90] docker network calico-20211117230315-9504 192.168.58.0/24 created
	I1117 23:09:49.117993   10140 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20211117230315-9504" container
	I1117 23:09:49.127384   10140 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:09:49.233918   10140 cli_runner.go:115] Run: docker volume create calico-20211117230315-9504 --label name.minikube.sigs.k8s.io=calico-20211117230315-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:09:49.330986   10140 oci.go:102] Successfully created a docker volume calico-20211117230315-9504
	I1117 23:09:49.335860   10140 cli_runner.go:115] Run: docker run --rm --name calico-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20211117230315-9504 --entrypoint /usr/bin/test -v calico-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:09:50.426642   10140 cli_runner.go:168] Completed: docker run --rm --name calico-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20211117230315-9504 --entrypoint /usr/bin/test -v calico-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.0906848s)
	I1117 23:09:50.426642   10140 oci.go:106] Successfully prepared a docker volume calico-20211117230315-9504
	I1117 23:09:50.426763   10140 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:09:50.426885   10140 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:09:50.431071   10140 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:09:50.434118   10140 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:09:50.543275   10140 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:09:50.543518   10140 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:09:50.804677   10140 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 23:09:50.523918291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:09:50.805259   10140 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:09:50.805259   10140 client.go:171] LocalClient.Create took 2.3316447s
	I1117 23:09:52.814389   10140 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:09:52.818064   10140 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504
	W1117 23:09:52.904211   10140 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504 returned with exit code 1
	I1117 23:09:52.904551   10140 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:09:53.187964   10140 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504
	W1117 23:09:53.279061   10140 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504 returned with exit code 1
	I1117 23:09:53.279157   10140 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:09:53.824917   10140 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504
	W1117 23:09:53.912442   10140 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504 returned with exit code 1
	I1117 23:09:53.912442   10140 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:09:54.572412   10140 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504
	W1117 23:09:54.664796   10140 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504 returned with exit code 1
	W1117 23:09:54.664970   10140 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	
	W1117 23:09:54.664998   10140 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:09:54.664998   10140 start.go:129] duration metric: createHost completed in 6.1956999s
	I1117 23:09:54.664998   10140 start.go:80] releasing machines lock for "calico-20211117230315-9504", held for 6.1963759s
	W1117 23:09:54.665132   10140 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:09:54.675050   10140 cli_runner.go:115] Run: docker container inspect calico-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:54.769241   10140 cli_runner.go:162] docker container inspect calico-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:54.769545   10140 delete.go:82] Unable to get host status for calico-20211117230315-9504, assuming it has already been deleted: state: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	W1117 23:09:54.769545   10140 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:09:54.769545   10140 start.go:547] Will try again in 5 seconds ...
	I1117 23:09:59.770301   10140 start.go:313] acquiring machines lock for calico-20211117230315-9504: {Name:mkcfad000a1e361222ba78663b9022d70b4f11bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:09:59.770634   10140 start.go:317] acquired machines lock for "calico-20211117230315-9504" in 276.8µs
	I1117 23:09:59.770845   10140 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:09:59.770876   10140 fix.go:55] fixHost starting: 
	I1117 23:09:59.779075   10140 cli_runner.go:115] Run: docker container inspect calico-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:59.872644   10140 cli_runner.go:162] docker container inspect calico-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:09:59.873347   10140 fix.go:108] recreateIfNeeded on calico-20211117230315-9504: state= err=unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:09:59.873347   10140 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:09:59.878274   10140 out.go:176] * docker "calico-20211117230315-9504" container is missing, will recreate.
	I1117 23:09:59.878393   10140 delete.go:124] DEMOLISHING calico-20211117230315-9504 ...
	I1117 23:09:59.887422   10140 cli_runner.go:115] Run: docker container inspect calico-20211117230315-9504 --format={{.State.Status}}
	W1117 23:09:59.977077   10140 cli_runner.go:162] docker container inspect calico-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:09:59.977235   10140 stop.go:75] unable to get state: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:09:59.977235   10140 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:09:59.985245   10140 cli_runner.go:115] Run: docker container inspect calico-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:00.076781   10140 cli_runner.go:162] docker container inspect calico-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:00.076967   10140 delete.go:82] Unable to get host status for calico-20211117230315-9504, assuming it has already been deleted: state: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:00.081488   10140 cli_runner.go:115] Run: docker container inspect -f {{.Id}} calico-20211117230315-9504
	W1117 23:10:00.173404   10140 cli_runner.go:162] docker container inspect -f {{.Id}} calico-20211117230315-9504 returned with exit code 1
	I1117 23:10:00.173587   10140 kic.go:360] could not find the container calico-20211117230315-9504 to remove it. will try anyways
	I1117 23:10:00.178019   10140 cli_runner.go:115] Run: docker container inspect calico-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:00.270131   10140 cli_runner.go:162] docker container inspect calico-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:10:00.270471   10140 oci.go:83] error getting container status, will try to delete anyways: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:00.274540   10140 cli_runner.go:115] Run: docker exec --privileged -t calico-20211117230315-9504 /bin/bash -c "sudo init 0"
	W1117 23:10:00.371285   10140 cli_runner.go:162] docker exec --privileged -t calico-20211117230315-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:10:00.371450   10140 oci.go:658] error shutdown calico-20211117230315-9504: docker exec --privileged -t calico-20211117230315-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:01.375754   10140 cli_runner.go:115] Run: docker container inspect calico-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:01.460758   10140 cli_runner.go:162] docker container inspect calico-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:01.460906   10140 oci.go:670] temporary error verifying shutdown: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:01.460906   10140 oci.go:672] temporary error: container calico-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:01.460906   10140 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:01.925290   10140 cli_runner.go:115] Run: docker container inspect calico-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:02.023839   10140 cli_runner.go:162] docker container inspect calico-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:02.024000   10140 oci.go:670] temporary error verifying shutdown: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:02.024095   10140 oci.go:672] temporary error: container calico-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:02.024178   10140 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:02.919045   10140 cli_runner.go:115] Run: docker container inspect calico-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:03.011059   10140 cli_runner.go:162] docker container inspect calico-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:03.011059   10140 oci.go:670] temporary error verifying shutdown: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:03.011344   10140 oci.go:672] temporary error: container calico-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:03.011384   10140 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:03.652715   10140 cli_runner.go:115] Run: docker container inspect calico-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:03.745665   10140 cli_runner.go:162] docker container inspect calico-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:03.745903   10140 oci.go:670] temporary error verifying shutdown: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:03.745903   10140 oci.go:672] temporary error: container calico-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:03.746010   10140 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:04.858196   10140 cli_runner.go:115] Run: docker container inspect calico-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:04.947062   10140 cli_runner.go:162] docker container inspect calico-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:04.947062   10140 oci.go:670] temporary error verifying shutdown: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:04.947062   10140 oci.go:672] temporary error: container calico-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:04.947062   10140 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:06.462954   10140 cli_runner.go:115] Run: docker container inspect calico-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:06.553644   10140 cli_runner.go:162] docker container inspect calico-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:06.553727   10140 oci.go:670] temporary error verifying shutdown: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:06.553802   10140 oci.go:672] temporary error: container calico-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:06.553802   10140 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:09.601333   10140 cli_runner.go:115] Run: docker container inspect calico-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:09.698373   10140 cli_runner.go:162] docker container inspect calico-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:09.698373   10140 oci.go:670] temporary error verifying shutdown: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:09.698373   10140 oci.go:672] temporary error: container calico-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:09.698373   10140 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:15.484304   10140 cli_runner.go:115] Run: docker container inspect calico-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:15.571874   10140 cli_runner.go:162] docker container inspect calico-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:15.571874   10140 oci.go:670] temporary error verifying shutdown: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:15.571874   10140 oci.go:672] temporary error: container calico-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:15.571874   10140 oci.go:87] couldn't shut down calico-20211117230315-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "calico-20211117230315-9504": docker container inspect calico-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	 
	I1117 23:10:15.575835   10140 cli_runner.go:115] Run: docker rm -f -v calico-20211117230315-9504
	W1117 23:10:15.665391   10140 cli_runner.go:162] docker rm -f -v calico-20211117230315-9504 returned with exit code 1
	W1117 23:10:15.666777   10140 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:10:15.666845   10140 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:10:16.667297   10140 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:10:16.678339   10140 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:10:16.679587   10140 start.go:160] libmachine.API.Create for "calico-20211117230315-9504" (driver="docker")
	I1117 23:10:16.679587   10140 client.go:168] LocalClient.Create starting
	I1117 23:10:16.680224   10140 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:10:16.680575   10140 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:16.680575   10140 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:16.680810   10140 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:10:16.680937   10140 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:16.680937   10140 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:16.686353   10140 cli_runner.go:115] Run: docker network inspect calico-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:10:16.773688   10140 cli_runner.go:162] docker network inspect calico-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:10:16.778263   10140 network_create.go:254] running [docker network inspect calico-20211117230315-9504] to gather additional debugging logs...
	I1117 23:10:16.778263   10140 cli_runner.go:115] Run: docker network inspect calico-20211117230315-9504
	W1117 23:10:16.868553   10140 cli_runner.go:162] docker network inspect calico-20211117230315-9504 returned with exit code 1
	I1117 23:10:16.868553   10140 network_create.go:257] error running [docker network inspect calico-20211117230315-9504]: docker network inspect calico-20211117230315-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20211117230315-9504
	I1117 23:10:16.868553   10140 network_create.go:259] output of [docker network inspect calico-20211117230315-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20211117230315-9504
	
	** /stderr **
	I1117 23:10:16.873139   10140 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:10:16.976521   10140 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007121f0] amended:true}} dirty:map[192.168.49.0:0xc0007121f0 192.168.58.0:0xc00060cd98] misses:0}
	I1117 23:10:16.976521   10140 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:16.988556   10140 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007121f0] amended:true}} dirty:map[192.168.49.0:0xc0007121f0 192.168.58.0:0xc00060cd98] misses:1}
	I1117 23:10:16.988556   10140 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:17.000140   10140 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007121f0] amended:true}} dirty:map[192.168.49.0:0xc0007121f0 192.168.58.0:0xc00060cd98 192.168.67.0:0xc000342398] misses:1}
	I1117 23:10:17.000140   10140 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:17.000140   10140 network_create.go:106] attempt to create docker network calico-20211117230315-9504 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 23:10:17.006564   10140 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20211117230315-9504
	I1117 23:10:17.217297   10140 network_create.go:90] docker network calico-20211117230315-9504 192.168.67.0/24 created
	I1117 23:10:17.217297   10140 kic.go:106] calculated static IP "192.168.67.2" for the "calico-20211117230315-9504" container
	I1117 23:10:17.225182   10140 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:10:17.319669   10140 cli_runner.go:115] Run: docker volume create calico-20211117230315-9504 --label name.minikube.sigs.k8s.io=calico-20211117230315-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:10:17.411969   10140 oci.go:102] Successfully created a docker volume calico-20211117230315-9504
	I1117 23:10:17.416237   10140 cli_runner.go:115] Run: docker run --rm --name calico-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20211117230315-9504 --entrypoint /usr/bin/test -v calico-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:10:18.369358   10140 oci.go:106] Successfully prepared a docker volume calico-20211117230315-9504
	I1117 23:10:18.369448   10140 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:10:18.369739   10140 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:10:18.374664   10140 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:18.374739   10140 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:10:18.484468   10140 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:10:18.484545   10140 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:10:18.731870   10140 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 23:10:18.461434354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:10:18.732413   10140 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:10:18.732413   10140 client.go:171] LocalClient.Create took 2.0528109s
	I1117 23:10:20.741594   10140 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:10:20.744308   10140 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504
	W1117 23:10:20.837775   10140 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504 returned with exit code 1
	I1117 23:10:20.838274   10140 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:21.021437   10140 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504
	W1117 23:10:21.120262   10140 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504 returned with exit code 1
	I1117 23:10:21.120420   10140 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:21.457036   10140 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504
	W1117 23:10:21.546838   10140 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504 returned with exit code 1
	I1117 23:10:21.546838   10140 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:22.012972   10140 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504
	W1117 23:10:22.108466   10140 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504 returned with exit code 1
	W1117 23:10:22.108887   10140 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	
	W1117 23:10:22.108945   10140 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:22.108945   10140 start.go:129] duration metric: createHost completed in 5.4416066s
	I1117 23:10:22.116691   10140 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:10:22.121108   10140 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504
	W1117 23:10:22.211305   10140 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504 returned with exit code 1
	I1117 23:10:22.211305   10140 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:22.411562   10140 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504
	W1117 23:10:22.503170   10140 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504 returned with exit code 1
	I1117 23:10:22.503647   10140 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:22.806271   10140 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504
	W1117 23:10:22.896173   10140 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504 returned with exit code 1
	I1117 23:10:22.896542   10140 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:23.565648   10140 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504
	W1117 23:10:23.658448   10140 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504 returned with exit code 1
	W1117 23:10:23.659017   10140 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	
	W1117 23:10:23.659017   10140 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117230315-9504
	I1117 23:10:23.659017   10140 fix.go:57] fixHost completed within 23.8879624s
	I1117 23:10:23.659017   10140 start.go:80] releasing machines lock for "calico-20211117230315-9504", held for 23.8881522s
	W1117 23:10:23.659546   10140 out.go:241] * Failed to start docker container. Running "minikube delete -p calico-20211117230315-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p calico-20211117230315-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:10:23.664072   10140 out.go:176] 
	W1117 23:10:23.664072   10140 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:10:23.664072   10140 out.go:241] * 
	* 
	W1117 23:10:23.665407   10140 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:10:23.667771   10140 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (38.15s)

TestNetworkPlugins/group/custom-weave/Start (38.03s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-weave-20211117230315-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p custom-weave-20211117230315-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker: exit status 80 (37.9330328s)

-- stdout --
	* [custom-weave-20211117230315-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node custom-weave-20211117230315-9504 in cluster custom-weave-20211117230315-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "custom-weave-20211117230315-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:10:03.303852    3856 out.go:297] Setting OutFile to fd 1404 ...
	I1117 23:10:03.374655    3856 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:10:03.374655    3856 out.go:310] Setting ErrFile to fd 1796...
	I1117 23:10:03.374707    3856 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:10:03.385893    3856 out.go:304] Setting JSON to false
	I1117 23:10:03.387873    3856 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79919,"bootTime":1637110684,"procs":131,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:10:03.387873    3856 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:10:03.404243    3856 out.go:176] * [custom-weave-20211117230315-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:10:03.404243    3856 notify.go:174] Checking for updates...
	I1117 23:10:03.407431    3856 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:10:03.410418    3856 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:10:03.412763    3856 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:10:03.414105    3856 config.go:176] Loaded profile config "calico-20211117230315-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:03.415107    3856 config.go:176] Loaded profile config "cilium-20211117230315-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:03.415107    3856 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:03.416083    3856 config.go:176] Loaded profile config "pause-20211117230855-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:03.416083    3856 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:10:05.056462    3856 docker.go:132] docker version: linux-19.03.12
	I1117 23:10:05.061316    3856 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:05.393862    3856 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:10:05.141874267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:10:05.397910    3856 out.go:176] * Using the docker driver based on user configuration
	I1117 23:10:05.397910    3856 start.go:280] selected driver: docker
	I1117 23:10:05.397910    3856 start.go:775] validating driver "docker" against <nil>
	I1117 23:10:05.397910    3856 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:10:05.458087    3856 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:05.792766    3856 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:10:05.533503901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:10:05.793141    3856 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:10:05.793141    3856 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:10:05.793742    3856 cni.go:93] Creating CNI manager for "testdata\\weavenet.yaml"
	I1117 23:10:05.793908    3856 start_flags.go:277] Found "testdata\\weavenet.yaml" CNI - setting NetworkPlugin=cni
	I1117 23:10:05.793908    3856 start_flags.go:282] config:
	{Name:custom-weave-20211117230315-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:custom-weave-20211117230315-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:10:05.797062    3856 out.go:176] * Starting control plane node custom-weave-20211117230315-9504 in cluster custom-weave-20211117230315-9504
	I1117 23:10:05.797184    3856 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:10:05.800144    3856 out.go:176] * Pulling base image ...
	I1117 23:10:05.800288    3856 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:10:05.800345    3856 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:10:05.800741    3856 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:10:05.800741    3856 cache.go:57] Caching tarball of preloaded images
	I1117 23:10:05.800741    3856 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:10:05.801277    3856 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:10:05.801592    3856 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\custom-weave-20211117230315-9504\config.json ...
	I1117 23:10:05.801592    3856 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\custom-weave-20211117230315-9504\config.json: {Name:mk7e2e6a55a451aafad07403ad3189c5f3e13fa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:10:05.896260    3856 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:10:05.897020    3856 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:10:05.897020    3856 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:10:05.897020    3856 start.go:313] acquiring machines lock for custom-weave-20211117230315-9504: {Name:mk6e359f047f181c78deda9d1294dfaf319dcb4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:10:05.897020    3856 start.go:317] acquired machines lock for "custom-weave-20211117230315-9504" in 0s
	I1117 23:10:05.897020    3856 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20211117230315-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:custom-weave-20211117230315-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:10:05.897671    3856 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:10:05.900987    3856 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:10:05.901743    3856 start.go:160] libmachine.API.Create for "custom-weave-20211117230315-9504" (driver="docker")
	I1117 23:10:05.901743    3856 client.go:168] LocalClient.Create starting
	I1117 23:10:05.902462    3856 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:10:05.902462    3856 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:05.902462    3856 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:05.902462    3856 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:10:05.903118    3856 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:05.903118    3856 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:05.909325    3856 cli_runner.go:115] Run: docker network inspect custom-weave-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:10:05.998951    3856 cli_runner.go:162] docker network inspect custom-weave-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:10:06.003601    3856 network_create.go:254] running [docker network inspect custom-weave-20211117230315-9504] to gather additional debugging logs...
	I1117 23:10:06.003625    3856 cli_runner.go:115] Run: docker network inspect custom-weave-20211117230315-9504
	W1117 23:10:06.093126    3856 cli_runner.go:162] docker network inspect custom-weave-20211117230315-9504 returned with exit code 1
	I1117 23:10:06.093504    3856 network_create.go:257] error running [docker network inspect custom-weave-20211117230315-9504]: docker network inspect custom-weave-20211117230315-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20211117230315-9504
	I1117 23:10:06.093504    3856 network_create.go:259] output of [docker network inspect custom-weave-20211117230315-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20211117230315-9504
	
	** /stderr **
	I1117 23:10:06.097690    3856 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:10:06.206209    3856 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001ac690] misses:0}
	I1117 23:10:06.206209    3856 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:06.206209    3856 network_create.go:106] attempt to create docker network custom-weave-20211117230315-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:10:06.209188    3856 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20211117230315-9504
	I1117 23:10:06.409552    3856 network_create.go:90] docker network custom-weave-20211117230315-9504 192.168.49.0/24 created
	I1117 23:10:06.409925    3856 kic.go:106] calculated static IP "192.168.49.2" for the "custom-weave-20211117230315-9504" container
	I1117 23:10:06.417149    3856 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:10:06.508427    3856 cli_runner.go:115] Run: docker volume create custom-weave-20211117230315-9504 --label name.minikube.sigs.k8s.io=custom-weave-20211117230315-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:10:06.603441    3856 oci.go:102] Successfully created a docker volume custom-weave-20211117230315-9504
	I1117 23:10:06.610031    3856 cli_runner.go:115] Run: docker run --rm --name custom-weave-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20211117230315-9504 --entrypoint /usr/bin/test -v custom-weave-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:10:07.691692    3856 cli_runner.go:168] Completed: docker run --rm --name custom-weave-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20211117230315-9504 --entrypoint /usr/bin/test -v custom-weave-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.0815768s)
	I1117 23:10:07.691990    3856 oci.go:106] Successfully prepared a docker volume custom-weave-20211117230315-9504
	I1117 23:10:07.692105    3856 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:10:07.692345    3856 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:10:07.696842    3856 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:07.696842    3856 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:10:07.803784    3856 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:10:07.803955    3856 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:10:08.042434    3856 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 23:10:07.782530152 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:10:08.042720    3856 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:10:08.042813    3856 client.go:171] LocalClient.Create took 2.1410545s
	I1117 23:10:10.052408    3856 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:10:10.055968    3856 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504
	W1117 23:10:10.149317    3856 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504 returned with exit code 1
	I1117 23:10:10.149623    3856 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:10.431300    3856 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504
	W1117 23:10:10.530458    3856 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504 returned with exit code 1
	I1117 23:10:10.530702    3856 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:11.075384    3856 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504
	W1117 23:10:11.168614    3856 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504 returned with exit code 1
	I1117 23:10:11.168952    3856 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:11.829398    3856 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504
	W1117 23:10:11.920452    3856 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504 returned with exit code 1
	W1117 23:10:11.920745    3856 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	
	W1117 23:10:11.920827    3856 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:11.920827    3856 start.go:129] duration metric: createHost completed in 6.0231099s
	I1117 23:10:11.920902    3856 start.go:80] releasing machines lock for "custom-weave-20211117230315-9504", held for 6.0238362s
	W1117 23:10:11.921051    3856 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:10:11.929304    3856 cli_runner.go:115] Run: docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:12.024835    3856 cli_runner.go:162] docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:12.024940    3856 delete.go:82] Unable to get host status for custom-weave-20211117230315-9504, assuming it has already been deleted: state: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	W1117 23:10:12.025179    3856 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:10:12.025179    3856 start.go:547] Will try again in 5 seconds ...
	I1117 23:10:17.026341    3856 start.go:313] acquiring machines lock for custom-weave-20211117230315-9504: {Name:mk6e359f047f181c78deda9d1294dfaf319dcb4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:10:17.026623    3856 start.go:317] acquired machines lock for "custom-weave-20211117230315-9504" in 0s
	I1117 23:10:17.026756    3856 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:10:17.026756    3856 fix.go:55] fixHost starting: 
	I1117 23:10:17.033508    3856 cli_runner.go:115] Run: docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:17.129670    3856 cli_runner.go:162] docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:17.129881    3856 fix.go:108] recreateIfNeeded on custom-weave-20211117230315-9504: state= err=unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:17.130080    3856 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:10:17.134830    3856 out.go:176] * docker "custom-weave-20211117230315-9504" container is missing, will recreate.
	I1117 23:10:17.134830    3856 delete.go:124] DEMOLISHING custom-weave-20211117230315-9504 ...
	I1117 23:10:17.143355    3856 cli_runner.go:115] Run: docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:17.239381    3856 cli_runner.go:162] docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:10:17.239599    3856 stop.go:75] unable to get state: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:17.239599    3856 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:17.248590    3856 cli_runner.go:115] Run: docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:17.344572    3856 cli_runner.go:162] docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:17.344572    3856 delete.go:82] Unable to get host status for custom-weave-20211117230315-9504, assuming it has already been deleted: state: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:17.348080    3856 cli_runner.go:115] Run: docker container inspect -f {{.Id}} custom-weave-20211117230315-9504
	W1117 23:10:17.439114    3856 cli_runner.go:162] docker container inspect -f {{.Id}} custom-weave-20211117230315-9504 returned with exit code 1
	I1117 23:10:17.439407    3856 kic.go:360] could not find the container custom-weave-20211117230315-9504 to remove it. will try anyways
	I1117 23:10:17.442758    3856 cli_runner.go:115] Run: docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:17.529512    3856 cli_runner.go:162] docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:10:17.529512    3856 oci.go:83] error getting container status, will try to delete anyways: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:17.533529    3856 cli_runner.go:115] Run: docker exec --privileged -t custom-weave-20211117230315-9504 /bin/bash -c "sudo init 0"
	W1117 23:10:17.632063    3856 cli_runner.go:162] docker exec --privileged -t custom-weave-20211117230315-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:10:17.632063    3856 oci.go:658] error shutdown custom-weave-20211117230315-9504: docker exec --privileged -t custom-weave-20211117230315-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:18.637272    3856 cli_runner.go:115] Run: docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:18.723086    3856 cli_runner.go:162] docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:18.723086    3856 oci.go:670] temporary error verifying shutdown: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:18.723086    3856 oci.go:672] temporary error: container custom-weave-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:18.723086    3856 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:19.191305    3856 cli_runner.go:115] Run: docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:19.278950    3856 cli_runner.go:162] docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:19.279054    3856 oci.go:670] temporary error verifying shutdown: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:19.279054    3856 oci.go:672] temporary error: container custom-weave-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:19.279054    3856 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:20.174825    3856 cli_runner.go:115] Run: docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:20.268562    3856 cli_runner.go:162] docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:20.268755    3856 oci.go:670] temporary error verifying shutdown: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:20.268755    3856 oci.go:672] temporary error: container custom-weave-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:20.268852    3856 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:20.910393    3856 cli_runner.go:115] Run: docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:21.003850    3856 cli_runner.go:162] docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:21.004145    3856 oci.go:670] temporary error verifying shutdown: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:21.004145    3856 oci.go:672] temporary error: container custom-weave-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:21.004145    3856 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:22.118028    3856 cli_runner.go:115] Run: docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:22.224955    3856 cli_runner.go:162] docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:22.224955    3856 oci.go:670] temporary error verifying shutdown: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:22.224955    3856 oci.go:672] temporary error: container custom-weave-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:22.224955    3856 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:23.741042    3856 cli_runner.go:115] Run: docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:23.846445    3856 cli_runner.go:162] docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:23.846739    3856 oci.go:670] temporary error verifying shutdown: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:23.846777    3856 oci.go:672] temporary error: container custom-weave-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:23.846777    3856 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:26.893983    3856 cli_runner.go:115] Run: docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:26.985706    3856 cli_runner.go:162] docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:26.986101    3856 oci.go:670] temporary error verifying shutdown: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:26.986140    3856 oci.go:672] temporary error: container custom-weave-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:26.986140    3856 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:32.774175    3856 cli_runner.go:115] Run: docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:32.875396    3856 cli_runner.go:162] docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:32.875595    3856 oci.go:670] temporary error verifying shutdown: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:32.875595    3856 oci.go:672] temporary error: container custom-weave-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:32.875643    3856 oci.go:87] couldn't shut down custom-weave-20211117230315-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "custom-weave-20211117230315-9504": docker container inspect custom-weave-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	 
	I1117 23:10:32.879839    3856 cli_runner.go:115] Run: docker rm -f -v custom-weave-20211117230315-9504
	W1117 23:10:32.967043    3856 cli_runner.go:162] docker rm -f -v custom-weave-20211117230315-9504 returned with exit code 1
	W1117 23:10:32.968309    3856 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:10:32.968506    3856 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:10:33.969156    3856 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:10:33.973724    3856 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:10:33.973724    3856 start.go:160] libmachine.API.Create for "custom-weave-20211117230315-9504" (driver="docker")
	I1117 23:10:33.973724    3856 client.go:168] LocalClient.Create starting
	I1117 23:10:33.974344    3856 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:10:33.975018    3856 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:33.975224    3856 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:33.975224    3856 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:10:33.975224    3856 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:33.975224    3856 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:33.982046    3856 cli_runner.go:115] Run: docker network inspect custom-weave-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:10:34.081889    3856 cli_runner.go:162] docker network inspect custom-weave-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:10:34.085509    3856 network_create.go:254] running [docker network inspect custom-weave-20211117230315-9504] to gather additional debugging logs...
	I1117 23:10:34.085509    3856 cli_runner.go:115] Run: docker network inspect custom-weave-20211117230315-9504
	W1117 23:10:34.176977    3856 cli_runner.go:162] docker network inspect custom-weave-20211117230315-9504 returned with exit code 1
	I1117 23:10:34.176977    3856 network_create.go:257] error running [docker network inspect custom-weave-20211117230315-9504]: docker network inspect custom-weave-20211117230315-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20211117230315-9504
	I1117 23:10:34.176977    3856 network_create.go:259] output of [docker network inspect custom-weave-20211117230315-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20211117230315-9504
	
	** /stderr **
	I1117 23:10:34.179978    3856 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:10:34.295157    3856 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001ac690] amended:false}} dirty:map[] misses:0}
	I1117 23:10:34.295157    3856 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:34.306157    3856 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001ac690] amended:true}} dirty:map[192.168.49.0:0xc0001ac690 192.168.58.0:0xc0001ac878] misses:0}
	I1117 23:10:34.306157    3856 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:34.306157    3856 network_create.go:106] attempt to create docker network custom-weave-20211117230315-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:10:34.310178    3856 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20211117230315-9504
	W1117 23:10:34.399918    3856 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20211117230315-9504 returned with exit code 1
	W1117 23:10:34.399918    3856 network_create.go:98] failed to create docker network custom-weave-20211117230315-9504 192.168.58.0/24, will retry: subnet is taken
	I1117 23:10:34.411526    3856 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001ac690] amended:true}} dirty:map[192.168.49.0:0xc0001ac690 192.168.58.0:0xc0001ac878] misses:1}
	I1117 23:10:34.411526    3856 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:34.421550    3856 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001ac690] amended:true}} dirty:map[192.168.49.0:0xc0001ac690 192.168.58.0:0xc0001ac878 192.168.67.0:0xc0006de3a8] misses:1}
	I1117 23:10:34.421550    3856 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:34.421550    3856 network_create.go:106] attempt to create docker network custom-weave-20211117230315-9504 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 23:10:34.425575    3856 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20211117230315-9504
	I1117 23:10:34.640179    3856 network_create.go:90] docker network custom-weave-20211117230315-9504 192.168.67.0/24 created
	I1117 23:10:34.640179    3856 kic.go:106] calculated static IP "192.168.67.2" for the "custom-weave-20211117230315-9504" container
	I1117 23:10:34.648473    3856 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:10:34.739560    3856 cli_runner.go:115] Run: docker volume create custom-weave-20211117230315-9504 --label name.minikube.sigs.k8s.io=custom-weave-20211117230315-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:10:34.833635    3856 oci.go:102] Successfully created a docker volume custom-weave-20211117230315-9504
	I1117 23:10:34.837395    3856 cli_runner.go:115] Run: docker run --rm --name custom-weave-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20211117230315-9504 --entrypoint /usr/bin/test -v custom-weave-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:10:35.702163    3856 oci.go:106] Successfully prepared a docker volume custom-weave-20211117230315-9504
	I1117 23:10:35.702359    3856 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:10:35.702411    3856 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:10:35.707257    3856 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:10:35.707546    3856 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:10:35.820269    3856 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:10:35.820370    3856 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:10:36.087683    3856 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:10:35.812154587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:10:36.088065    3856 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:10:36.088123    3856 client.go:171] LocalClient.Create took 2.1137964s
	I1117 23:10:38.096378    3856 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:10:38.100102    3856 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504
	W1117 23:10:38.205372    3856 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504 returned with exit code 1
	I1117 23:10:38.205372    3856 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:38.388356    3856 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504
	W1117 23:10:38.480779    3856 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504 returned with exit code 1
	I1117 23:10:38.481058    3856 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:38.816056    3856 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504
	W1117 23:10:38.906370    3856 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504 returned with exit code 1
	I1117 23:10:38.906841    3856 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:39.373959    3856 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504
	W1117 23:10:39.464746    3856 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504 returned with exit code 1
	W1117 23:10:39.464746    3856 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	
	W1117 23:10:39.464746    3856 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:39.464746    3856 start.go:129] duration metric: createHost completed in 5.4955494s
	I1117 23:10:39.474612    3856 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:10:39.478065    3856 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504
	W1117 23:10:39.569639    3856 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504 returned with exit code 1
	I1117 23:10:39.570010    3856 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:39.773164    3856 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504
	W1117 23:10:39.859554    3856 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504 returned with exit code 1
	I1117 23:10:39.860012    3856 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:40.163377    3856 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504
	W1117 23:10:40.254740    3856 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504 returned with exit code 1
	I1117 23:10:40.254921    3856 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:40.923620    3856 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504
	W1117 23:10:41.018332    3856 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504 returned with exit code 1
	W1117 23:10:41.018773    3856 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	
	W1117 23:10:41.018828    3856 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117230315-9504
	I1117 23:10:41.018886    3856 fix.go:57] fixHost completed within 23.9919502s
	I1117 23:10:41.018942    3856 start.go:80] releasing machines lock for "custom-weave-20211117230315-9504", held for 23.992139s
	W1117 23:10:41.019656    3856 out.go:241] * Failed to start docker container. Running "minikube delete -p custom-weave-20211117230315-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p custom-weave-20211117230315-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:10:41.024072    3856 out.go:176] 
	W1117 23:10:41.024310    3856 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:10:41.024405    3856 out.go:241] * 
	* 
	W1117 23:10:41.025881    3856 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:10:41.030322    3856 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (38.03s)
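An aside on the subnet churn visible in the stderr block above: minikube's free-subnet search started at 192.168.49.0/24 (skipped, unexpired reservation), moved to 192.168.58.0/24 (reserved, then `docker network create` reported the subnet as taken), and succeeded on 192.168.67.0/24. Each retry advances the third octet by 9. A minimal sketch of that progression, illustrating only the pattern seen in this log, not minikube's actual implementation:

```shell
# Candidate /24 subnets in the order the log above tried them: start at
# 192.168.49.0 and step the third octet by 9 after each collision.
octet=49
subnets=""
for attempt in 1 2 3; do
  subnet="192.168.${octet}.0/24"
  echo "$subnet"
  subnets="$subnets $subnet"
  octet=$((octet + 9))
done
```

The third candidate matches the subnet the run finally created with `docker network create --subnet=192.168.67.0/24` before failing later on the preload extraction.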

TestNetworkPlugins/group/enable-default-cni/Start (38.07s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20211117230313-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p enable-default-cni-20211117230313-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: exit status 80 (37.9788729s)

-- stdout --
	* [enable-default-cni-20211117230313-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node enable-default-cni-20211117230313-9504 in cluster enable-default-cni-20211117230313-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "enable-default-cni-20211117230313-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:10:26.670972    2072 out.go:297] Setting OutFile to fd 1888 ...
	I1117 23:10:26.742261    2072 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:10:26.742261    2072 out.go:310] Setting ErrFile to fd 1896...
	I1117 23:10:26.742336    2072 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:10:26.754357    2072 out.go:304] Setting JSON to false
	I1117 23:10:26.755891    2072 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79942,"bootTime":1637110684,"procs":132,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:10:26.755891    2072 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:10:26.762061    2072 out.go:176] * [enable-default-cni-20211117230313-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:10:26.762391    2072 notify.go:174] Checking for updates...
	I1117 23:10:26.765120    2072 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:10:26.768247    2072 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:10:26.772700    2072 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:10:26.773390    2072 config.go:176] Loaded profile config "calico-20211117230315-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:26.774091    2072 config.go:176] Loaded profile config "custom-weave-20211117230315-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:26.774091    2072 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:26.774091    2072 config.go:176] Loaded profile config "pause-20211117230855-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:26.774091    2072 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:10:28.361973    2072 docker.go:132] docker version: linux-19.03.12
	I1117 23:10:28.367800    2072 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:28.729274    2072 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:10:28.447121492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:10:28.735802    2072 out.go:176] * Using the docker driver based on user configuration
	I1117 23:10:28.735890    2072 start.go:280] selected driver: docker
	I1117 23:10:28.735890    2072 start.go:775] validating driver "docker" against <nil>
	I1117 23:10:28.735972    2072 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:10:28.794365    2072 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:29.159629    2072 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:10:28.884083448 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:10:29.159918    2072 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	E1117 23:10:29.160129    2072 start_flags.go:399] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1117 23:10:29.160129    2072 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:10:29.160129    2072 cni.go:93] Creating CNI manager for "bridge"
	I1117 23:10:29.160129    2072 start_flags.go:277] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1117 23:10:29.160129    2072 start_flags.go:282] config:
	{Name:enable-default-cni-20211117230313-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:enable-default-cni-20211117230313-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:10:29.164538    2072 out.go:176] * Starting control plane node enable-default-cni-20211117230313-9504 in cluster enable-default-cni-20211117230313-9504
	I1117 23:10:29.164538    2072 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:10:29.167718    2072 out.go:176] * Pulling base image ...
	I1117 23:10:29.167718    2072 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:10:29.167718    2072 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:10:29.167718    2072 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:10:29.167718    2072 cache.go:57] Caching tarball of preloaded images
	I1117 23:10:29.168746    2072 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:10:29.168986    2072 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:10:29.169166    2072 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\enable-default-cni-20211117230313-9504\config.json ...
	I1117 23:10:29.169166    2072 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\enable-default-cni-20211117230313-9504\config.json: {Name:mkcdd2f6a7bc169112b26c6c71ddbe6ccc57221f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:10:29.265297    2072 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:10:29.265297    2072 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:10:29.265297    2072 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:10:29.267306    2072 start.go:313] acquiring machines lock for enable-default-cni-20211117230313-9504: {Name:mk3bd086231fd04d7b7746f777010152cfba8d05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:10:29.267306    2072 start.go:317] acquired machines lock for "enable-default-cni-20211117230313-9504" in 0s
	I1117 23:10:29.267306    2072 start.go:89] Provisioning new machine with config: &{Name:enable-default-cni-20211117230313-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:enable-default-cni-20211117230313-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:10:29.267306    2072 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:10:29.272302    2072 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:10:29.272302    2072 start.go:160] libmachine.API.Create for "enable-default-cni-20211117230313-9504" (driver="docker")
	I1117 23:10:29.272302    2072 client.go:168] LocalClient.Create starting
	I1117 23:10:29.273309    2072 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:10:29.273309    2072 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:29.273309    2072 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:29.273309    2072 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:10:29.273309    2072 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:29.273309    2072 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:29.278314    2072 cli_runner.go:115] Run: docker network inspect enable-default-cni-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:10:29.362968    2072 cli_runner.go:162] docker network inspect enable-default-cni-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:10:29.366881    2072 network_create.go:254] running [docker network inspect enable-default-cni-20211117230313-9504] to gather additional debugging logs...
	I1117 23:10:29.366881    2072 cli_runner.go:115] Run: docker network inspect enable-default-cni-20211117230313-9504
	W1117 23:10:29.456284    2072 cli_runner.go:162] docker network inspect enable-default-cni-20211117230313-9504 returned with exit code 1
	I1117 23:10:29.456563    2072 network_create.go:257] error running [docker network inspect enable-default-cni-20211117230313-9504]: docker network inspect enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20211117230313-9504
	I1117 23:10:29.456563    2072 network_create.go:259] output of [docker network inspect enable-default-cni-20211117230313-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20211117230313-9504
	
	** /stderr **
	I1117 23:10:29.461453    2072 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:10:29.571870    2072 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000614248] misses:0}
	I1117 23:10:29.571870    2072 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:29.571870    2072 network_create.go:106] attempt to create docker network enable-default-cni-20211117230313-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:10:29.575860    2072 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20211117230313-9504
	I1117 23:10:29.782543    2072 network_create.go:90] docker network enable-default-cni-20211117230313-9504 192.168.49.0/24 created
	I1117 23:10:29.782543    2072 kic.go:106] calculated static IP "192.168.49.2" for the "enable-default-cni-20211117230313-9504" container
	I1117 23:10:29.791525    2072 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:10:29.902496    2072 cli_runner.go:115] Run: docker volume create enable-default-cni-20211117230313-9504 --label name.minikube.sigs.k8s.io=enable-default-cni-20211117230313-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:10:30.004619    2072 oci.go:102] Successfully created a docker volume enable-default-cni-20211117230313-9504
	I1117 23:10:30.013155    2072 cli_runner.go:115] Run: docker run --rm --name enable-default-cni-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20211117230313-9504 --entrypoint /usr/bin/test -v enable-default-cni-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:10:31.218975    2072 cli_runner.go:168] Completed: docker run --rm --name enable-default-cni-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20211117230313-9504 --entrypoint /usr/bin/test -v enable-default-cni-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.2058112s)
	I1117 23:10:31.219118    2072 oci.go:106] Successfully prepared a docker volume enable-default-cni-20211117230313-9504
	I1117 23:10:31.219227    2072 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:10:31.219439    2072 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:10:31.224439    2072 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:31.224439    2072 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:10:31.341617    2072 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:10:31.341617    2072 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:10:31.595619    2072 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 23:10:31.324116733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:10:31.595619    2072 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:10:31.595619    2072 client.go:171] LocalClient.Create took 2.3232998s
	I1117 23:10:33.604263    2072 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:10:33.606956    2072 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504
	W1117 23:10:33.699870    2072 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504 returned with exit code 1
	I1117 23:10:33.700083    2072 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:33.980658    2072 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504
	W1117 23:10:34.068166    2072 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504 returned with exit code 1
	I1117 23:10:34.068471    2072 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:34.612844    2072 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504
	W1117 23:10:34.708613    2072 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504 returned with exit code 1
	I1117 23:10:34.708822    2072 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:35.369785    2072 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504
	W1117 23:10:35.463012    2072 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504 returned with exit code 1
	W1117 23:10:35.463156    2072 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	
	W1117 23:10:35.463156    2072 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:35.463156    2072 start.go:129] duration metric: createHost completed in 6.1958036s
	I1117 23:10:35.463156    2072 start.go:80] releasing machines lock for "enable-default-cni-20211117230313-9504", held for 6.1958036s
	W1117 23:10:35.463156    2072 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:10:35.472254    2072 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}
	W1117 23:10:35.568652    2072 cli_runner.go:162] docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:35.568986    2072 delete.go:82] Unable to get host status for enable-default-cni-20211117230313-9504, assuming it has already been deleted: state: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	W1117 23:10:35.569343    2072 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:10:35.569343    2072 start.go:547] Will try again in 5 seconds ...
	I1117 23:10:40.570252    2072 start.go:313] acquiring machines lock for enable-default-cni-20211117230313-9504: {Name:mk3bd086231fd04d7b7746f777010152cfba8d05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:10:40.570776    2072 start.go:317] acquired machines lock for "enable-default-cni-20211117230313-9504" in 524.6µs
	I1117 23:10:40.570977    2072 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:10:40.570977    2072 fix.go:55] fixHost starting: 
	I1117 23:10:40.579436    2072 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}
	W1117 23:10:40.674403    2072 cli_runner.go:162] docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:40.674403    2072 fix.go:108] recreateIfNeeded on enable-default-cni-20211117230313-9504: state= err=unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:40.674403    2072 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:10:40.679427    2072 out.go:176] * docker "enable-default-cni-20211117230313-9504" container is missing, will recreate.
	I1117 23:10:40.679541    2072 delete.go:124] DEMOLISHING enable-default-cni-20211117230313-9504 ...
	I1117 23:10:40.687224    2072 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}
	W1117 23:10:40.780256    2072 cli_runner.go:162] docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:10:40.780364    2072 stop.go:75] unable to get state: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:40.780364    2072 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:40.789906    2072 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}
	W1117 23:10:40.876784    2072 cli_runner.go:162] docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:40.877240    2072 delete.go:82] Unable to get host status for enable-default-cni-20211117230313-9504, assuming it has already been deleted: state: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:40.881969    2072 cli_runner.go:115] Run: docker container inspect -f {{.Id}} enable-default-cni-20211117230313-9504
	W1117 23:10:40.973268    2072 cli_runner.go:162] docker container inspect -f {{.Id}} enable-default-cni-20211117230313-9504 returned with exit code 1
	I1117 23:10:40.973605    2072 kic.go:360] could not find the container enable-default-cni-20211117230313-9504 to remove it. will try anyways
	I1117 23:10:40.977929    2072 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}
	W1117 23:10:41.071542    2072 cli_runner.go:162] docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:10:41.071542    2072 oci.go:83] error getting container status, will try to delete anyways: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:41.077449    2072 cli_runner.go:115] Run: docker exec --privileged -t enable-default-cni-20211117230313-9504 /bin/bash -c "sudo init 0"
	W1117 23:10:41.203279    2072 cli_runner.go:162] docker exec --privileged -t enable-default-cni-20211117230313-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:10:41.203279    2072 oci.go:658] error shutdown enable-default-cni-20211117230313-9504: docker exec --privileged -t enable-default-cni-20211117230313-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:42.208760    2072 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}
	W1117 23:10:42.309454    2072 cli_runner.go:162] docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:42.309734    2072 oci.go:670] temporary error verifying shutdown: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:42.309734    2072 oci.go:672] temporary error: container enable-default-cni-20211117230313-9504 status is  but expect it to be exited
	I1117 23:10:42.309818    2072 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:42.777375    2072 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}
	W1117 23:10:42.867933    2072 cli_runner.go:162] docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:42.867933    2072 oci.go:670] temporary error verifying shutdown: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:42.867933    2072 oci.go:672] temporary error: container enable-default-cni-20211117230313-9504 status is  but expect it to be exited
	I1117 23:10:42.867933    2072 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:43.762919    2072 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}
	W1117 23:10:43.860056    2072 cli_runner.go:162] docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:43.860056    2072 oci.go:670] temporary error verifying shutdown: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:43.860056    2072 oci.go:672] temporary error: container enable-default-cni-20211117230313-9504 status is  but expect it to be exited
	I1117 23:10:43.860056    2072 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:44.501301    2072 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}
	W1117 23:10:44.595782    2072 cli_runner.go:162] docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:44.596098    2072 oci.go:670] temporary error verifying shutdown: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:44.596098    2072 oci.go:672] temporary error: container enable-default-cni-20211117230313-9504 status is  but expect it to be exited
	I1117 23:10:44.596176    2072 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:45.707444    2072 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}
	W1117 23:10:45.803119    2072 cli_runner.go:162] docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:45.803119    2072 oci.go:670] temporary error verifying shutdown: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:45.803119    2072 oci.go:672] temporary error: container enable-default-cni-20211117230313-9504 status is  but expect it to be exited
	I1117 23:10:45.803119    2072 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:47.319933    2072 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}
	W1117 23:10:47.413560    2072 cli_runner.go:162] docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:47.413560    2072 oci.go:670] temporary error verifying shutdown: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:47.413560    2072 oci.go:672] temporary error: container enable-default-cni-20211117230313-9504 status is  but expect it to be exited
	I1117 23:10:47.413560    2072 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:50.460973    2072 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}
	W1117 23:10:50.551607    2072 cli_runner.go:162] docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:50.551607    2072 oci.go:670] temporary error verifying shutdown: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:50.551607    2072 oci.go:672] temporary error: container enable-default-cni-20211117230313-9504 status is  but expect it to be exited
	I1117 23:10:50.551607    2072 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:56.339537    2072 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}
	W1117 23:10:56.438310    2072 cli_runner.go:162] docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:56.438409    2072 oci.go:670] temporary error verifying shutdown: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:10:56.438409    2072 oci.go:672] temporary error: container enable-default-cni-20211117230313-9504 status is  but expect it to be exited
	I1117 23:10:56.438465    2072 oci.go:87] couldn't shut down enable-default-cni-20211117230313-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117230313-9504": docker container inspect enable-default-cni-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	 
	I1117 23:10:56.442258    2072 cli_runner.go:115] Run: docker rm -f -v enable-default-cni-20211117230313-9504
	W1117 23:10:56.537583    2072 cli_runner.go:162] docker rm -f -v enable-default-cni-20211117230313-9504 returned with exit code 1
	W1117 23:10:56.538638    2072 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:10:56.538638    2072 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:10:57.539503    2072 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:10:57.543065    2072 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:10:57.543471    2072 start.go:160] libmachine.API.Create for "enable-default-cni-20211117230313-9504" (driver="docker")
	I1117 23:10:57.543530    2072 client.go:168] LocalClient.Create starting
	I1117 23:10:57.544204    2072 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:10:57.544483    2072 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:57.544536    2072 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:57.544712    2072 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:10:57.544992    2072 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:57.545050    2072 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:57.549932    2072 cli_runner.go:115] Run: docker network inspect enable-default-cni-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:10:57.638243    2072 cli_runner.go:162] docker network inspect enable-default-cni-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:10:57.643636    2072 network_create.go:254] running [docker network inspect enable-default-cni-20211117230313-9504] to gather additional debugging logs...
	I1117 23:10:57.643690    2072 cli_runner.go:115] Run: docker network inspect enable-default-cni-20211117230313-9504
	W1117 23:10:57.730468    2072 cli_runner.go:162] docker network inspect enable-default-cni-20211117230313-9504 returned with exit code 1
	I1117 23:10:57.730468    2072 network_create.go:257] error running [docker network inspect enable-default-cni-20211117230313-9504]: docker network inspect enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20211117230313-9504
	I1117 23:10:57.730468    2072 network_create.go:259] output of [docker network inspect enable-default-cni-20211117230313-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20211117230313-9504
	
	** /stderr **
	I1117 23:10:57.735626    2072 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:10:57.837545    2072 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000614248] amended:false}} dirty:map[] misses:0}
	I1117 23:10:57.837545    2072 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:57.848921    2072 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000614248] amended:true}} dirty:map[192.168.49.0:0xc000614248 192.168.58.0:0xc00010c2e8] misses:0}
	I1117 23:10:57.848921    2072 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:57.848921    2072 network_create.go:106] attempt to create docker network enable-default-cni-20211117230313-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:10:57.854074    2072 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20211117230313-9504
	I1117 23:10:58.055803    2072 network_create.go:90] docker network enable-default-cni-20211117230313-9504 192.168.58.0/24 created
	I1117 23:10:58.056213    2072 kic.go:106] calculated static IP "192.168.58.2" for the "enable-default-cni-20211117230313-9504" container
	I1117 23:10:58.065785    2072 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:10:58.172512    2072 cli_runner.go:115] Run: docker volume create enable-default-cni-20211117230313-9504 --label name.minikube.sigs.k8s.io=enable-default-cni-20211117230313-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:10:58.265571    2072 oci.go:102] Successfully created a docker volume enable-default-cni-20211117230313-9504
	I1117 23:10:58.270411    2072 cli_runner.go:115] Run: docker run --rm --name enable-default-cni-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20211117230313-9504 --entrypoint /usr/bin/test -v enable-default-cni-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:10:59.159971    2072 oci.go:106] Successfully prepared a docker volume enable-default-cni-20211117230313-9504
	I1117 23:10:59.159971    2072 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:10:59.159971    2072 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:10:59.164814    2072 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:59.164814    2072 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:10:59.280372    2072 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:10:59.280703    2072 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:10:59.518906    2072 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:10:59.244853898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:10:59.519561    2072 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:10:59.519582    2072 client.go:171] LocalClient.Create took 1.9760372s
	I1117 23:11:01.527859    2072 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:01.531399    2072 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504
	W1117 23:11:01.630898    2072 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504 returned with exit code 1
	I1117 23:11:01.631070    2072 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:11:01.814364    2072 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504
	W1117 23:11:01.915706    2072 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504 returned with exit code 1
	I1117 23:11:01.916118    2072 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:11:02.252044    2072 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504
	W1117 23:11:02.341251    2072 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504 returned with exit code 1
	I1117 23:11:02.341640    2072 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:11:02.807614    2072 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504
	W1117 23:11:02.892760    2072 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504 returned with exit code 1
	W1117 23:11:02.893046    2072 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	
	W1117 23:11:02.893046    2072 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:11:02.893046    2072 start.go:129] duration metric: createHost completed in 5.3535029s
	I1117 23:11:02.901618    2072 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:02.905059    2072 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504
	W1117 23:11:02.994162    2072 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504 returned with exit code 1
	I1117 23:11:02.994350    2072 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:11:03.195565    2072 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504
	W1117 23:11:03.286600    2072 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504 returned with exit code 1
	I1117 23:11:03.286600    2072 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:11:03.589782    2072 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504
	W1117 23:11:03.675318    2072 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504 returned with exit code 1
	I1117 23:11:03.675531    2072 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:11:04.343712    2072 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504
	W1117 23:11:04.436218    2072 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504 returned with exit code 1
	W1117 23:11:04.436542    2072 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	
	W1117 23:11:04.436634    2072 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117230313-9504
	I1117 23:11:04.436679    2072 fix.go:57] fixHost completed within 23.865523s
	I1117 23:11:04.436679    2072 start.go:80] releasing machines lock for "enable-default-cni-20211117230313-9504", held for 23.8656807s
	W1117 23:11:04.436679    2072 out.go:241] * Failed to start docker container. Running "minikube delete -p enable-default-cni-20211117230313-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p enable-default-cni-20211117230313-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:11:04.441306    2072 out.go:176] 
	W1117 23:11:04.441578    2072 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:11:04.441698    2072 out.go:241] * 
	* 
	W1117 23:11:04.443018    2072 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:11:04.445274    2072 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (38.07s)

TestNetworkPlugins/group/kindnet/Start (38.24s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20211117230315-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kindnet-20211117230315-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: exit status 80 (38.1390715s)
-- stdout --
	* [kindnet-20211117230315-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node kindnet-20211117230315-9504 in cluster kindnet-20211117230315-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "kindnet-20211117230315-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	I1117 23:10:28.413262    7596 out.go:297] Setting OutFile to fd 1652 ...
	I1117 23:10:28.480705    7596 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:10:28.480705    7596 out.go:310] Setting ErrFile to fd 1644...
	I1117 23:10:28.480705    7596 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:10:28.496893    7596 out.go:304] Setting JSON to false
	I1117 23:10:28.499332    7596 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79944,"bootTime":1637110684,"procs":133,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:10:28.499332    7596 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:10:28.503358    7596 out.go:176] * [kindnet-20211117230315-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:10:28.503914    7596 notify.go:174] Checking for updates...
	I1117 23:10:28.507724    7596 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:10:28.510322    7596 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:10:28.512519    7596 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:10:28.513808    7596 config.go:176] Loaded profile config "custom-weave-20211117230315-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:28.514366    7596 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:28.514754    7596 config.go:176] Loaded profile config "pause-20211117230855-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:28.514901    7596 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:10:30.152616    7596 docker.go:132] docker version: linux-19.03.12
	I1117 23:10:30.155989    7596 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:30.497093    7596 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:54 SystemTime:2021-11-17 23:10:30.234812615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:10:30.502096    7596 out.go:176] * Using the docker driver based on user configuration
	I1117 23:10:30.502096    7596 start.go:280] selected driver: docker
	I1117 23:10:30.502096    7596 start.go:775] validating driver "docker" against <nil>
	I1117 23:10:30.502096    7596 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:10:30.563441    7596 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:30.932245    7596 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:46 OomKillDisable:true NGoroutines:70 SystemTime:2021-11-17 23:10:30.645666077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:10:30.932245    7596 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:10:30.933033    7596 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:10:30.933033    7596 cni.go:93] Creating CNI manager for "kindnet"
	I1117 23:10:30.933033    7596 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1117 23:10:30.933033    7596 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1117 23:10:30.933033    7596 start_flags.go:277] Found "CNI" CNI - setting NetworkPlugin=cni
	I1117 23:10:30.933033    7596 start_flags.go:282] config:
	{Name:kindnet-20211117230315-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:kindnet-20211117230315-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:10:30.937494    7596 out.go:176] * Starting control plane node kindnet-20211117230315-9504 in cluster kindnet-20211117230315-9504
	I1117 23:10:30.937494    7596 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:10:30.939840    7596 out.go:176] * Pulling base image ...
	I1117 23:10:30.940032    7596 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:10:30.940032    7596 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:10:30.940032    7596 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:10:30.940032    7596 cache.go:57] Caching tarball of preloaded images
	I1117 23:10:30.940787    7596 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:10:30.940982    7596 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:10:30.941229    7596 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-20211117230315-9504\config.json ...
	I1117 23:10:30.941562    7596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-20211117230315-9504\config.json: {Name:mkce7c2e61159f8da28e0400ba5968f4b089ad14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:10:31.041835    7596 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:10:31.041835    7596 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:10:31.041835    7596 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:10:31.041835    7596 start.go:313] acquiring machines lock for kindnet-20211117230315-9504: {Name:mk70529de6da3be9b1bbc3811368498293ea214a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:10:31.041835    7596 start.go:317] acquired machines lock for "kindnet-20211117230315-9504" in 0s
	I1117 23:10:31.041835    7596 start.go:89] Provisioning new machine with config: &{Name:kindnet-20211117230315-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:kindnet-20211117230315-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:10:31.041835    7596 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:10:31.046834    7596 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:10:31.047849    7596 start.go:160] libmachine.API.Create for "kindnet-20211117230315-9504" (driver="docker")
	I1117 23:10:31.047849    7596 client.go:168] LocalClient.Create starting
	I1117 23:10:31.047849    7596 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:10:31.048829    7596 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:31.048829    7596 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:31.048829    7596 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:10:31.048829    7596 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:31.048829    7596 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:31.053825    7596 cli_runner.go:115] Run: docker network inspect kindnet-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:10:31.150875    7596 cli_runner.go:162] docker network inspect kindnet-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:10:31.155884    7596 network_create.go:254] running [docker network inspect kindnet-20211117230315-9504] to gather additional debugging logs...
	I1117 23:10:31.155927    7596 cli_runner.go:115] Run: docker network inspect kindnet-20211117230315-9504
	W1117 23:10:31.272617    7596 cli_runner.go:162] docker network inspect kindnet-20211117230315-9504 returned with exit code 1
	I1117 23:10:31.272617    7596 network_create.go:257] error running [docker network inspect kindnet-20211117230315-9504]: docker network inspect kindnet-20211117230315-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20211117230315-9504
	I1117 23:10:31.272617    7596 network_create.go:259] output of [docker network inspect kindnet-20211117230315-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20211117230315-9504
	
	** /stderr **
	I1117 23:10:31.276453    7596 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:10:31.399146    7596 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00079a2a8] misses:0}
	I1117 23:10:31.399146    7596 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:31.399146    7596 network_create.go:106] attempt to create docker network kindnet-20211117230315-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:10:31.403132    7596 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20211117230315-9504
	W1117 23:10:31.499650    7596 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20211117230315-9504 returned with exit code 1
	W1117 23:10:31.499650    7596 network_create.go:98] failed to create docker network kindnet-20211117230315-9504 192.168.49.0/24, will retry: subnet is taken
	I1117 23:10:31.516161    7596 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00079a2a8] amended:false}} dirty:map[] misses:0}
	I1117 23:10:31.516388    7596 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:31.533084    7596 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00079a2a8] amended:true}} dirty:map[192.168.49.0:0xc00079a2a8 192.168.58.0:0xc000722318] misses:0}
	I1117 23:10:31.533251    7596 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:31.533297    7596 network_create.go:106] attempt to create docker network kindnet-20211117230315-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:10:31.538103    7596 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20211117230315-9504
	I1117 23:10:31.738485    7596 network_create.go:90] docker network kindnet-20211117230315-9504 192.168.58.0/24 created
	I1117 23:10:31.738485    7596 kic.go:106] calculated static IP "192.168.58.2" for the "kindnet-20211117230315-9504" container
	I1117 23:10:31.747115    7596 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:10:31.846475    7596 cli_runner.go:115] Run: docker volume create kindnet-20211117230315-9504 --label name.minikube.sigs.k8s.io=kindnet-20211117230315-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:10:31.951925    7596 oci.go:102] Successfully created a docker volume kindnet-20211117230315-9504
	I1117 23:10:31.956153    7596 cli_runner.go:115] Run: docker run --rm --name kindnet-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20211117230315-9504 --entrypoint /usr/bin/test -v kindnet-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:10:33.117342    7596 cli_runner.go:168] Completed: docker run --rm --name kindnet-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20211117230315-9504 --entrypoint /usr/bin/test -v kindnet-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.1611811s)
	I1117 23:10:33.117342    7596 oci.go:106] Successfully prepared a docker volume kindnet-20211117230315-9504
	I1117 23:10:33.117342    7596 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:10:33.117714    7596 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:10:33.120898    7596 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:33.121898    7596 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:10:33.241567    7596 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:10:33.241567    7596 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:10:33.457009    7596 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:10:33.204904016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:10:33.457009    7596 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:10:33.457644    7596 client.go:171] LocalClient.Create took 2.4097762s
	I1117 23:10:35.466908    7596 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:10:35.471254    7596 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504
	W1117 23:10:35.568652    7596 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504 returned with exit code 1
	I1117 23:10:35.568986    7596 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:35.848524    7596 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504
	W1117 23:10:35.946222    7596 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504 returned with exit code 1
	I1117 23:10:35.946222    7596 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:36.491212    7596 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504
	W1117 23:10:36.584732    7596 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504 returned with exit code 1
	I1117 23:10:36.584732    7596 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:37.243197    7596 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504
	W1117 23:10:37.340044    7596 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504 returned with exit code 1
	W1117 23:10:37.340044    7596 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	
	W1117 23:10:37.340367    7596 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:37.340367    7596 start.go:129] duration metric: createHost completed in 6.2984843s
	I1117 23:10:37.340406    7596 start.go:80] releasing machines lock for "kindnet-20211117230315-9504", held for 6.2984843s
	W1117 23:10:37.340554    7596 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:10:37.349553    7596 cli_runner.go:115] Run: docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:37.444229    7596 cli_runner.go:162] docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:37.444229    7596 delete.go:82] Unable to get host status for kindnet-20211117230315-9504, assuming it has already been deleted: state: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	W1117 23:10:37.444716    7596 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:10:37.444716    7596 start.go:547] Will try again in 5 seconds ...
	I1117 23:10:42.446412    7596 start.go:313] acquiring machines lock for kindnet-20211117230315-9504: {Name:mk70529de6da3be9b1bbc3811368498293ea214a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:10:42.446412    7596 start.go:317] acquired machines lock for "kindnet-20211117230315-9504" in 0s
	I1117 23:10:42.446412    7596 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:10:42.446412    7596 fix.go:55] fixHost starting: 
	I1117 23:10:42.455791    7596 cli_runner.go:115] Run: docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:42.549375    7596 cli_runner.go:162] docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:42.549431    7596 fix.go:108] recreateIfNeeded on kindnet-20211117230315-9504: state= err=unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:42.549431    7596 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:10:42.554086    7596 out.go:176] * docker "kindnet-20211117230315-9504" container is missing, will recreate.
	I1117 23:10:42.554155    7596 delete.go:124] DEMOLISHING kindnet-20211117230315-9504 ...
	I1117 23:10:42.560167    7596 cli_runner.go:115] Run: docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:42.650985    7596 cli_runner.go:162] docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:10:42.650985    7596 stop.go:75] unable to get state: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:42.650985    7596 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:42.661138    7596 cli_runner.go:115] Run: docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:42.751712    7596 cli_runner.go:162] docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:42.751774    7596 delete.go:82] Unable to get host status for kindnet-20211117230315-9504, assuming it has already been deleted: state: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:42.755597    7596 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kindnet-20211117230315-9504
	W1117 23:10:42.848683    7596 cli_runner.go:162] docker container inspect -f {{.Id}} kindnet-20211117230315-9504 returned with exit code 1
	I1117 23:10:42.848683    7596 kic.go:360] could not find the container kindnet-20211117230315-9504 to remove it. will try anyways
	I1117 23:10:42.853290    7596 cli_runner.go:115] Run: docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:42.945003    7596 cli_runner.go:162] docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:10:42.945003    7596 oci.go:83] error getting container status, will try to delete anyways: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:42.949946    7596 cli_runner.go:115] Run: docker exec --privileged -t kindnet-20211117230315-9504 /bin/bash -c "sudo init 0"
	W1117 23:10:43.036505    7596 cli_runner.go:162] docker exec --privileged -t kindnet-20211117230315-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:10:43.036505    7596 oci.go:658] error shutdown kindnet-20211117230315-9504: docker exec --privileged -t kindnet-20211117230315-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:44.040902    7596 cli_runner.go:115] Run: docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:44.151644    7596 cli_runner.go:162] docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:44.151644    7596 oci.go:670] temporary error verifying shutdown: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:44.151935    7596 oci.go:672] temporary error: container kindnet-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:44.151971    7596 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:44.619986    7596 cli_runner.go:115] Run: docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:44.714120    7596 cli_runner.go:162] docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:44.714120    7596 oci.go:670] temporary error verifying shutdown: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:44.714120    7596 oci.go:672] temporary error: container kindnet-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:44.714120    7596 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:45.609458    7596 cli_runner.go:115] Run: docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:45.707444    7596 cli_runner.go:162] docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:45.707444    7596 oci.go:670] temporary error verifying shutdown: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:45.707444    7596 oci.go:672] temporary error: container kindnet-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:45.707444    7596 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:46.348738    7596 cli_runner.go:115] Run: docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:46.444326    7596 cli_runner.go:162] docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:46.444326    7596 oci.go:670] temporary error verifying shutdown: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:46.444326    7596 oci.go:672] temporary error: container kindnet-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:46.444541    7596 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:47.556135    7596 cli_runner.go:115] Run: docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:47.655331    7596 cli_runner.go:162] docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:47.655331    7596 oci.go:670] temporary error verifying shutdown: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:47.655331    7596 oci.go:672] temporary error: container kindnet-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:47.655331    7596 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:49.171050    7596 cli_runner.go:115] Run: docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:49.278426    7596 cli_runner.go:162] docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:49.278426    7596 oci.go:670] temporary error verifying shutdown: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:49.278426    7596 oci.go:672] temporary error: container kindnet-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:49.278426    7596 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:52.323164    7596 cli_runner.go:115] Run: docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:52.417599    7596 cli_runner.go:162] docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:52.417599    7596 oci.go:670] temporary error verifying shutdown: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:52.417599    7596 oci.go:672] temporary error: container kindnet-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:52.417599    7596 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:58.204231    7596 cli_runner.go:115] Run: docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}
	W1117 23:10:58.305228    7596 cli_runner.go:162] docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:58.305228    7596 oci.go:670] temporary error verifying shutdown: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:10:58.305228    7596 oci.go:672] temporary error: container kindnet-20211117230315-9504 status is  but expect it to be exited
	I1117 23:10:58.305228    7596 oci.go:87] couldn't shut down kindnet-20211117230315-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kindnet-20211117230315-9504": docker container inspect kindnet-20211117230315-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	 
	I1117 23:10:58.309997    7596 cli_runner.go:115] Run: docker rm -f -v kindnet-20211117230315-9504
	W1117 23:10:58.400743    7596 cli_runner.go:162] docker rm -f -v kindnet-20211117230315-9504 returned with exit code 1
	W1117 23:10:58.401844    7596 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:10:58.401844    7596 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:10:59.402163    7596 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:10:59.405577    7596 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:10:59.405577    7596 start.go:160] libmachine.API.Create for "kindnet-20211117230315-9504" (driver="docker")
	I1117 23:10:59.405577    7596 client.go:168] LocalClient.Create starting
	I1117 23:10:59.406783    7596 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:10:59.407044    7596 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:59.407096    7596 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:59.407320    7596 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:10:59.407465    7596 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:59.407465    7596 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:59.411705    7596 cli_runner.go:115] Run: docker network inspect kindnet-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:10:59.512491    7596 cli_runner.go:162] docker network inspect kindnet-20211117230315-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:10:59.515920    7596 network_create.go:254] running [docker network inspect kindnet-20211117230315-9504] to gather additional debugging logs...
	I1117 23:10:59.515920    7596 cli_runner.go:115] Run: docker network inspect kindnet-20211117230315-9504
	W1117 23:10:59.610457    7596 cli_runner.go:162] docker network inspect kindnet-20211117230315-9504 returned with exit code 1
	I1117 23:10:59.610457    7596 network_create.go:257] error running [docker network inspect kindnet-20211117230315-9504]: docker network inspect kindnet-20211117230315-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20211117230315-9504
	I1117 23:10:59.610457    7596 network_create.go:259] output of [docker network inspect kindnet-20211117230315-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20211117230315-9504
	
	** /stderr **
	I1117 23:10:59.614910    7596 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:10:59.717691    7596 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00079a2a8] amended:true}} dirty:map[192.168.49.0:0xc00079a2a8 192.168.58.0:0xc000722318] misses:0}
	I1117 23:10:59.717691    7596 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:59.729517    7596 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00079a2a8] amended:true}} dirty:map[192.168.49.0:0xc00079a2a8 192.168.58.0:0xc000722318] misses:1}
	I1117 23:10:59.729761    7596 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:59.741981    7596 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00079a2a8] amended:true}} dirty:map[192.168.49.0:0xc00079a2a8 192.168.58.0:0xc000722318 192.168.67.0:0xc00079a2b8] misses:1}
	I1117 23:10:59.741981    7596 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:59.741981    7596 network_create.go:106] attempt to create docker network kindnet-20211117230315-9504 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 23:10:59.746064    7596 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20211117230315-9504
	I1117 23:10:59.948384    7596 network_create.go:90] docker network kindnet-20211117230315-9504 192.168.67.0/24 created
	I1117 23:10:59.948384    7596 kic.go:106] calculated static IP "192.168.67.2" for the "kindnet-20211117230315-9504" container
	I1117 23:10:59.956812    7596 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:11:00.062868    7596 cli_runner.go:115] Run: docker volume create kindnet-20211117230315-9504 --label name.minikube.sigs.k8s.io=kindnet-20211117230315-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:11:00.155820    7596 oci.go:102] Successfully created a docker volume kindnet-20211117230315-9504
	I1117 23:11:00.160561    7596 cli_runner.go:115] Run: docker run --rm --name kindnet-20211117230315-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20211117230315-9504 --entrypoint /usr/bin/test -v kindnet-20211117230315-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:11:01.064590    7596 oci.go:106] Successfully prepared a docker volume kindnet-20211117230315-9504
	I1117 23:11:01.064590    7596 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:11:01.064590    7596 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:11:01.069998    7596 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:01.070203    7596 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:11:01.190472    7596 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:11:01.190472    7596 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20211117230315-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:11:01.424723    7596 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:11:01.1540325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.
docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:11:01.425289    7596 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:11:01.425449    7596 client.go:171] LocalClient.Create took 2.0198566s
	I1117 23:11:03.433394    7596 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:03.437050    7596 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504
	W1117 23:11:03.527617    7596 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504 returned with exit code 1
	I1117 23:11:03.527963    7596 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:11:03.711584    7596 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504
	W1117 23:11:03.799728    7596 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504 returned with exit code 1
	I1117 23:11:03.800155    7596 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:11:04.135225    7596 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504
	W1117 23:11:04.229225    7596 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504 returned with exit code 1
	I1117 23:11:04.229645    7596 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:11:04.693472    7596 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504
	W1117 23:11:04.783151    7596 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504 returned with exit code 1
	W1117 23:11:04.783455    7596 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	
	W1117 23:11:04.783522    7596 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:11:04.783522    7596 start.go:129] duration metric: createHost completed in 5.3813192s
	I1117 23:11:04.791081    7596 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:04.794869    7596 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504
	W1117 23:11:04.892327    7596 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504 returned with exit code 1
	I1117 23:11:04.892715    7596 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:11:05.093589    7596 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504
	W1117 23:11:05.183348    7596 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504 returned with exit code 1
	I1117 23:11:05.183619    7596 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:11:05.485858    7596 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504
	W1117 23:11:05.576808    7596 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504 returned with exit code 1
	I1117 23:11:05.577060    7596 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:11:06.244168    7596 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504
	W1117 23:11:06.338322    7596 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504 returned with exit code 1
	W1117 23:11:06.338592    7596 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	
	W1117 23:11:06.338640    7596 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117230315-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117230315-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117230315-9504
	I1117 23:11:06.338640    7596 fix.go:57] fixHost completed within 23.8920484s
	I1117 23:11:06.338680    7596 start.go:80] releasing machines lock for "kindnet-20211117230315-9504", held for 23.8920885s
	W1117 23:11:06.339154    7596 out.go:241] * Failed to start docker container. Running "minikube delete -p kindnet-20211117230315-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p kindnet-20211117230315-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:11:06.344594    7596 out.go:176] 
	W1117 23:11:06.344594    7596 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:11:06.344594    7596 out.go:241] * 
	* 
	W1117 23:11:06.347283    7596 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:11:06.350416    7596 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (38.24s)

TestPause/serial/Pause (5.73s)

=== RUN   TestPause/serial/Pause
pause_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20211117230855-9504 --alsologtostderr -v=5

=== CONT  TestPause/serial/Pause
pause_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p pause-20211117230855-9504 --alsologtostderr -v=5: exit status 80 (1.8597339s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 23:10:40.144899     196 out.go:297] Setting OutFile to fd 1448 ...
	I1117 23:10:40.215654     196 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:10:40.215725     196 out.go:310] Setting ErrFile to fd 1560...
	I1117 23:10:40.215776     196 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:10:40.225439     196 out.go:304] Setting JSON to false
	I1117 23:10:40.226413     196 mustload.go:65] Loading cluster: pause-20211117230855-9504
	I1117 23:10:40.226680     196 config.go:176] Loaded profile config "pause-20211117230855-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:40.235560     196 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:41.783311     196 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:41.783511     196 cli_runner.go:168] Completed: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: (1.5477396s)
	I1117 23:10:41.788597     196 out.go:176] 
	W1117 23:10:41.788730     196 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	
	X Exiting due to GUEST_STATUS: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	
	W1117 23:10:41.788792     196 out.go:241] * 
	* 
	W1117 23:10:41.798548     196 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:10:41.801547     196 out.go:176] 

** /stderr **
pause_test.go:110: failed to pause minikube with args: "out/minikube-windows-amd64.exe pause -p pause-20211117230855-9504 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117230855-9504
helpers_test.go:235: (dbg) docker inspect pause-20211117230855-9504:

-- stdout --
	[
	    {
	        "Name": "pause-20211117230855-9504",
	        "Id": "1526ca367da2e2230c9e8171330d5ce95b1affd84562077fb1bfd799901f6da5",
	        "Created": "2021-11-17T23:10:30.916394294Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504

=== CONT  TestPause/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504: exit status 7 (1.8438337s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:10:43.754288   12168 status.go:247] status error: host: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117230855-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117230855-9504
helpers_test.go:235: (dbg) docker inspect pause-20211117230855-9504:

-- stdout --
	[
	    {
	        "Name": "pause-20211117230855-9504",
	        "Id": "1526ca367da2e2230c9e8171330d5ce95b1affd84562077fb1bfd799901f6da5",
	        "Created": "2021-11-17T23:10:30.916394294Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504: exit status 7 (1.8208856s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:10:45.670533    5204 status.go:247] status error: host: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117230855-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/Pause (5.73s)

TestPause/serial/VerifyStatus (3.89s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-20211117230855-9504 --output=json --layout=cluster

=== CONT  TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-20211117230855-9504 --output=json --layout=cluster: exit status 7 (1.9394329s)

-- stdout --
	{"Name":"pause-20211117230855-9504","StatusCode":100,"StatusName":"Starting","Step":"Creating Container","StepDetail":"* Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.24.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":520,"StatusName":"Unknown"}},"Nodes":[{"Name":"pause-20211117230855-9504","StatusCode":520,"StatusName":"Unknown","Components":{"apiserver":{"Name":"apiserver","StatusCode":520,"StatusName":"Unknown"},"kubelet":{"Name":"kubelet","StatusCode":520,"StatusName":"Unknown"}}}]}

-- /stdout --
** stderr ** 
	E1117 23:10:47.604293    8624 status.go:258] status error: host: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	E1117 23:10:47.604293    8624 status.go:261] The "pause-20211117230855-9504" host does not exist!
	E1117 23:10:47.605285    8624 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E1117 23:10:47.605499    8624 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E1117 23:10:47.605499    8624 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E1117 23:10:47.605579    8624 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax

** /stderr **
pause_test.go:198: incorrect status code: 100
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117230855-9504
helpers_test.go:235: (dbg) docker inspect pause-20211117230855-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:09:00Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-20211117230855-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/pause-20211117230855-9504/_data",
	        "Name": "pause-20211117230855-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504: exit status 7 (1.8388389s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:10:49.557191    4344 status.go:247] status error: host: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117230855-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/VerifyStatus (3.89s)

TestNetworkPlugins/group/bridge/Start (38.26s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20211117230313-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p bridge-20211117230313-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: exit status 80 (38.1789628s)

-- stdout --
	* [bridge-20211117230313-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node bridge-20211117230313-9504 in cluster bridge-20211117230313-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "bridge-20211117230313-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:10:47.149122   12028 out.go:297] Setting OutFile to fd 1916 ...
	I1117 23:10:47.214861   12028 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:10:47.214861   12028 out.go:310] Setting ErrFile to fd 1920...
	I1117 23:10:47.214861   12028 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:10:47.229706   12028 out.go:304] Setting JSON to false
	I1117 23:10:47.232144   12028 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79963,"bootTime":1637110684,"procs":132,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:10:47.232257   12028 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:10:47.236476   12028 out.go:176] * [bridge-20211117230313-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:10:47.236476   12028 notify.go:174] Checking for updates...
	I1117 23:10:47.240252   12028 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:10:47.242578   12028 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:10:47.244737   12028 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:10:47.245771   12028 config.go:176] Loaded profile config "enable-default-cni-20211117230313-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:47.246409   12028 config.go:176] Loaded profile config "kindnet-20211117230315-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:47.246409   12028 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:47.247059   12028 config.go:176] Loaded profile config "pause-20211117230855-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:47.247059   12028 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:10:48.883391   12028 docker.go:132] docker version: linux-19.03.12
	I1117 23:10:48.887345   12028 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:49.232074   12028 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:10:48.968549293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:10:49.240616   12028 out.go:176] * Using the docker driver based on user configuration
	I1117 23:10:49.240616   12028 start.go:280] selected driver: docker
	I1117 23:10:49.240616   12028 start.go:775] validating driver "docker" against <nil>
	I1117 23:10:49.241173   12028 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:10:49.300474   12028 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:49.646144   12028 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:10:49.376797947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:10:49.646330   12028 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:10:49.646892   12028 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:10:49.646942   12028 cni.go:93] Creating CNI manager for "bridge"
	I1117 23:10:49.647005   12028 start_flags.go:277] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1117 23:10:49.647056   12028 start_flags.go:282] config:
	{Name:bridge-20211117230313-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:bridge-20211117230313-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Netw
orkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:10:49.651254   12028 out.go:176] * Starting control plane node bridge-20211117230313-9504 in cluster bridge-20211117230313-9504
	I1117 23:10:49.651373   12028 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:10:49.654982   12028 out.go:176] * Pulling base image ...
	I1117 23:10:49.654982   12028 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:10:49.654982   12028 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:10:49.654982   12028 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:10:49.655524   12028 cache.go:57] Caching tarball of preloaded images
	I1117 23:10:49.655986   12028 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:10:49.656268   12028 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:10:49.656268   12028 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\bridge-20211117230313-9504\config.json ...
	I1117 23:10:49.656268   12028 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\bridge-20211117230313-9504\config.json: {Name:mkfc6eea393717436a2964e03d66beddb8ff1ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:10:49.753054   12028 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:10:49.753054   12028 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:10:49.753054   12028 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:10:49.753054   12028 start.go:313] acquiring machines lock for bridge-20211117230313-9504: {Name:mk3ea166a72b2ffcb96acf2d8c2aa2630862e1f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:10:49.753503   12028 start.go:317] acquired machines lock for "bridge-20211117230313-9504" in 164.7µs
	I1117 23:10:49.753705   12028 start.go:89] Provisioning new machine with config: &{Name:bridge-20211117230313-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:bridge-20211117230313-9504 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:10:49.753936   12028 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:10:49.757123   12028 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:10:49.757123   12028 start.go:160] libmachine.API.Create for "bridge-20211117230313-9504" (driver="docker")
	I1117 23:10:49.757123   12028 client.go:168] LocalClient.Create starting
	I1117 23:10:49.757123   12028 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:10:49.758244   12028 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:49.758308   12028 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:49.758471   12028 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:10:49.758700   12028 main.go:130] libmachine: Decoding PEM data...
	I1117 23:10:49.758700   12028 main.go:130] libmachine: Parsing certificate...
	I1117 23:10:49.763223   12028 cli_runner.go:115] Run: docker network inspect bridge-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:10:49.869264   12028 cli_runner.go:162] docker network inspect bridge-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:10:49.873662   12028 network_create.go:254] running [docker network inspect bridge-20211117230313-9504] to gather additional debugging logs...
	I1117 23:10:49.874273   12028 cli_runner.go:115] Run: docker network inspect bridge-20211117230313-9504
	W1117 23:10:49.961413   12028 cli_runner.go:162] docker network inspect bridge-20211117230313-9504 returned with exit code 1
	I1117 23:10:49.961413   12028 network_create.go:257] error running [docker network inspect bridge-20211117230313-9504]: docker network inspect bridge-20211117230313-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20211117230313-9504
	I1117 23:10:49.961413   12028 network_create.go:259] output of [docker network inspect bridge-20211117230313-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20211117230313-9504
	
	** /stderr **
	I1117 23:10:49.967016   12028 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:10:50.075901   12028 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00099c470] misses:0}
	I1117 23:10:50.076565   12028 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:10:50.076616   12028 network_create.go:106] attempt to create docker network bridge-20211117230313-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:10:50.081248   12028 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20211117230313-9504
	I1117 23:10:50.309102   12028 network_create.go:90] docker network bridge-20211117230313-9504 192.168.49.0/24 created
	I1117 23:10:50.309102   12028 kic.go:106] calculated static IP "192.168.49.2" for the "bridge-20211117230313-9504" container
	I1117 23:10:50.318506   12028 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:10:50.412101   12028 cli_runner.go:115] Run: docker volume create bridge-20211117230313-9504 --label name.minikube.sigs.k8s.io=bridge-20211117230313-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:10:50.512700   12028 oci.go:102] Successfully created a docker volume bridge-20211117230313-9504
	I1117 23:10:50.520027   12028 cli_runner.go:115] Run: docker run --rm --name bridge-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20211117230313-9504 --entrypoint /usr/bin/test -v bridge-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:10:51.650596   12028 cli_runner.go:168] Completed: docker run --rm --name bridge-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20211117230313-9504 --entrypoint /usr/bin/test -v bridge-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.130561s)
	I1117 23:10:51.650596   12028 oci.go:106] Successfully prepared a docker volume bridge-20211117230313-9504
	I1117 23:10:51.650596   12028 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:10:51.650596   12028 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:10:51.651616   12028 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:10:51.657133   12028 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:10:51.785457   12028 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:10:51.785512   12028 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:10:52.023615   12028 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:10:51.740120126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:10:52.024136   12028 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:10:52.024136   12028 client.go:171] LocalClient.Create took 2.2669955s
	I1117 23:10:54.033310   12028 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:10:54.035980   12028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504
	W1117 23:10:54.131540   12028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504 returned with exit code 1
	I1117 23:10:54.131847   12028 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:10:54.415690   12028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504
	W1117 23:10:54.519373   12028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504 returned with exit code 1
	I1117 23:10:54.519373   12028 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:10:55.066203   12028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504
	W1117 23:10:55.155634   12028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504 returned with exit code 1
	I1117 23:10:55.155818   12028 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:10:55.819338   12028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504
	W1117 23:10:55.908271   12028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504 returned with exit code 1
	W1117 23:10:55.908474   12028 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	
	W1117 23:10:55.908511   12028 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:10:55.908511   12028 start.go:129] duration metric: createHost completed in 6.1545291s
	I1117 23:10:55.908576   12028 start.go:80] releasing machines lock for "bridge-20211117230313-9504", held for 6.1548888s
	W1117 23:10:55.908769   12028 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:10:55.917719   12028 cli_runner.go:115] Run: docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}
	W1117 23:10:56.019858   12028 cli_runner.go:162] docker container inspect bridge-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:56.019858   12028 delete.go:82] Unable to get host status for bridge-20211117230313-9504, assuming it has already been deleted: state: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	W1117 23:10:56.020501   12028 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:10:56.020567   12028 start.go:547] Will try again in 5 seconds ...
	I1117 23:11:01.021010   12028 start.go:313] acquiring machines lock for bridge-20211117230313-9504: {Name:mk3ea166a72b2ffcb96acf2d8c2aa2630862e1f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:01.021373   12028 start.go:317] acquired machines lock for "bridge-20211117230313-9504" in 293.5µs
	I1117 23:11:01.021594   12028 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:11:01.021594   12028 fix.go:55] fixHost starting: 
	I1117 23:11:01.029403   12028 cli_runner.go:115] Run: docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:01.136275   12028 cli_runner.go:162] docker container inspect bridge-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:01.136494   12028 fix.go:108] recreateIfNeeded on bridge-20211117230313-9504: state= err=unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:01.136494   12028 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:11:01.139824   12028 out.go:176] * docker "bridge-20211117230313-9504" container is missing, will recreate.
	I1117 23:11:01.139892   12028 delete.go:124] DEMOLISHING bridge-20211117230313-9504 ...
	I1117 23:11:01.149253   12028 cli_runner.go:115] Run: docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:01.249547   12028 cli_runner.go:162] docker container inspect bridge-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:11:01.249736   12028 stop.go:75] unable to get state: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:01.249736   12028 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:01.258602   12028 cli_runner.go:115] Run: docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:01.350510   12028 cli_runner.go:162] docker container inspect bridge-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:01.350510   12028 delete.go:82] Unable to get host status for bridge-20211117230313-9504, assuming it has already been deleted: state: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:01.354885   12028 cli_runner.go:115] Run: docker container inspect -f {{.Id}} bridge-20211117230313-9504
	W1117 23:11:01.445265   12028 cli_runner.go:162] docker container inspect -f {{.Id}} bridge-20211117230313-9504 returned with exit code 1
	I1117 23:11:01.445265   12028 kic.go:360] could not find the container bridge-20211117230313-9504 to remove it. will try anyways
	I1117 23:11:01.449928   12028 cli_runner.go:115] Run: docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:01.543727   12028 cli_runner.go:162] docker container inspect bridge-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:11:01.543727   12028 oci.go:83] error getting container status, will try to delete anyways: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:01.547820   12028 cli_runner.go:115] Run: docker exec --privileged -t bridge-20211117230313-9504 /bin/bash -c "sudo init 0"
	W1117 23:11:01.635316   12028 cli_runner.go:162] docker exec --privileged -t bridge-20211117230313-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:11:01.635316   12028 oci.go:658] error shutdown bridge-20211117230313-9504: docker exec --privileged -t bridge-20211117230313-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:02.640313   12028 cli_runner.go:115] Run: docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:02.732750   12028 cli_runner.go:162] docker container inspect bridge-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:02.733030   12028 oci.go:670] temporary error verifying shutdown: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:02.733084   12028 oci.go:672] temporary error: container bridge-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:02.733084   12028 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:03.200954   12028 cli_runner.go:115] Run: docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:03.290172   12028 cli_runner.go:162] docker container inspect bridge-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:03.290378   12028 oci.go:670] temporary error verifying shutdown: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:03.290378   12028 oci.go:672] temporary error: container bridge-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:03.290378   12028 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:04.185666   12028 cli_runner.go:115] Run: docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:04.291651   12028 cli_runner.go:162] docker container inspect bridge-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:04.291742   12028 oci.go:670] temporary error verifying shutdown: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:04.291742   12028 oci.go:672] temporary error: container bridge-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:04.291742   12028 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:04.933735   12028 cli_runner.go:115] Run: docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:05.025040   12028 cli_runner.go:162] docker container inspect bridge-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:05.025301   12028 oci.go:670] temporary error verifying shutdown: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:05.025301   12028 oci.go:672] temporary error: container bridge-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:05.025377   12028 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:06.138808   12028 cli_runner.go:115] Run: docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:06.224570   12028 cli_runner.go:162] docker container inspect bridge-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:06.224570   12028 oci.go:670] temporary error verifying shutdown: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:06.224570   12028 oci.go:672] temporary error: container bridge-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:06.224570   12028 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:07.740410   12028 cli_runner.go:115] Run: docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:07.835840   12028 cli_runner.go:162] docker container inspect bridge-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:07.836030   12028 oci.go:670] temporary error verifying shutdown: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:07.836175   12028 oci.go:672] temporary error: container bridge-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:07.836256   12028 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:10.881960   12028 cli_runner.go:115] Run: docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:10.985506   12028 cli_runner.go:162] docker container inspect bridge-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:10.985506   12028 oci.go:670] temporary error verifying shutdown: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:10.985506   12028 oci.go:672] temporary error: container bridge-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:10.985506   12028 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:16.773490   12028 cli_runner.go:115] Run: docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:16.866597   12028 cli_runner.go:162] docker container inspect bridge-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:16.866597   12028 oci.go:670] temporary error verifying shutdown: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:16.866597   12028 oci.go:672] temporary error: container bridge-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:16.866597   12028 oci.go:87] couldn't shut down bridge-20211117230313-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "bridge-20211117230313-9504": docker container inspect bridge-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	 
	I1117 23:11:16.871638   12028 cli_runner.go:115] Run: docker rm -f -v bridge-20211117230313-9504
	W1117 23:11:16.956961   12028 cli_runner.go:162] docker rm -f -v bridge-20211117230313-9504 returned with exit code 1
	W1117 23:11:16.957839   12028 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:11:16.957839   12028 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:11:17.958157   12028 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:11:17.961785   12028 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:11:17.962280   12028 start.go:160] libmachine.API.Create for "bridge-20211117230313-9504" (driver="docker")
	I1117 23:11:17.962404   12028 client.go:168] LocalClient.Create starting
	I1117 23:11:17.962929   12028 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:11:17.963269   12028 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:17.963388   12028 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:17.963619   12028 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:11:17.963953   12028 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:17.963953   12028 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:17.970589   12028 cli_runner.go:115] Run: docker network inspect bridge-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:11:18.058569   12028 cli_runner.go:162] docker network inspect bridge-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:11:18.062237   12028 network_create.go:254] running [docker network inspect bridge-20211117230313-9504] to gather additional debugging logs...
	I1117 23:11:18.062953   12028 cli_runner.go:115] Run: docker network inspect bridge-20211117230313-9504
	W1117 23:11:18.153212   12028 cli_runner.go:162] docker network inspect bridge-20211117230313-9504 returned with exit code 1
	I1117 23:11:18.153212   12028 network_create.go:257] error running [docker network inspect bridge-20211117230313-9504]: docker network inspect bridge-20211117230313-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20211117230313-9504
	I1117 23:11:18.153212   12028 network_create.go:259] output of [docker network inspect bridge-20211117230313-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20211117230313-9504
	
	** /stderr **
	I1117 23:11:18.157880   12028 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:11:18.260777   12028 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00099c470] amended:false}} dirty:map[] misses:0}
	I1117 23:11:18.260777   12028 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:18.274870   12028 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00099c470] amended:true}} dirty:map[192.168.49.0:0xc00099c470 192.168.58.0:0xc0000067d0] misses:0}
	I1117 23:11:18.274870   12028 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:18.274870   12028 network_create.go:106] attempt to create docker network bridge-20211117230313-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:11:18.279112   12028 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20211117230313-9504
	W1117 23:11:18.370493   12028 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20211117230313-9504 returned with exit code 1
	W1117 23:11:18.370531   12028 network_create.go:98] failed to create docker network bridge-20211117230313-9504 192.168.58.0/24, will retry: subnet is taken
	I1117 23:11:18.383898   12028 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00099c470] amended:true}} dirty:map[192.168.49.0:0xc00099c470 192.168.58.0:0xc0000067d0] misses:1}
	I1117 23:11:18.383898   12028 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:18.396566   12028 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00099c470] amended:true}} dirty:map[192.168.49.0:0xc00099c470 192.168.58.0:0xc0000067d0 192.168.67.0:0xc000711048] misses:1}
	I1117 23:11:18.396566   12028 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:18.396566   12028 network_create.go:106] attempt to create docker network bridge-20211117230313-9504 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 23:11:18.401487   12028 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20211117230313-9504
	W1117 23:11:18.488519   12028 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20211117230313-9504 returned with exit code 1
	W1117 23:11:18.488667   12028 network_create.go:98] failed to create docker network bridge-20211117230313-9504 192.168.67.0/24, will retry: subnet is taken
	I1117 23:11:18.501393   12028 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00099c470] amended:true}} dirty:map[192.168.49.0:0xc00099c470 192.168.58.0:0xc0000067d0 192.168.67.0:0xc000711048] misses:2}
	I1117 23:11:18.501393   12028 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:18.513509   12028 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00099c470] amended:true}} dirty:map[192.168.49.0:0xc00099c470 192.168.58.0:0xc0000067d0 192.168.67.0:0xc000711048 192.168.76.0:0xc000988380] misses:2}
	I1117 23:11:18.513752   12028 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:18.513752   12028 network_create.go:106] attempt to create docker network bridge-20211117230313-9504 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1117 23:11:18.517895   12028 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20211117230313-9504
	I1117 23:11:18.713649   12028 network_create.go:90] docker network bridge-20211117230313-9504 192.168.76.0/24 created
	I1117 23:11:18.713649   12028 kic.go:106] calculated static IP "192.168.76.2" for the "bridge-20211117230313-9504" container
	I1117 23:11:18.722185   12028 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:11:18.812899   12028 cli_runner.go:115] Run: docker volume create bridge-20211117230313-9504 --label name.minikube.sigs.k8s.io=bridge-20211117230313-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:11:18.902775   12028 oci.go:102] Successfully created a docker volume bridge-20211117230313-9504
	I1117 23:11:18.910177   12028 cli_runner.go:115] Run: docker run --rm --name bridge-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20211117230313-9504 --entrypoint /usr/bin/test -v bridge-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:11:19.817586   12028 oci.go:106] Successfully prepared a docker volume bridge-20211117230313-9504
	I1117 23:11:19.817586   12028 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:11:19.817942   12028 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:11:19.823041   12028 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:11:19.825268   12028 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:11:19.929219   12028 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:11:19.929420   12028 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:11:20.187169   12028 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 23:11:19.913434801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:11:20.187169   12028 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:11:20.187169   12028 client.go:171] LocalClient.Create took 2.2246932s
	I1117 23:11:22.195177   12028 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:22.199040   12028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504
	W1117 23:11:22.289098   12028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504 returned with exit code 1
	I1117 23:11:22.289098   12028 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:22.472643   12028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504
	W1117 23:11:22.562188   12028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504 returned with exit code 1
	I1117 23:11:22.562596   12028 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:22.897169   12028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504
	W1117 23:11:22.991836   12028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504 returned with exit code 1
	I1117 23:11:22.992208   12028 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:23.457403   12028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504
	W1117 23:11:23.546351   12028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504 returned with exit code 1
	W1117 23:11:23.546570   12028 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	
	W1117 23:11:23.546643   12028 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:23.546643   12028 start.go:129] duration metric: createHost completed in 5.5884436s
	I1117 23:11:23.554330   12028 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:23.558275   12028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504
	W1117 23:11:23.651481   12028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504 returned with exit code 1
	I1117 23:11:23.651554   12028 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:23.854132   12028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504
	W1117 23:11:23.949456   12028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504 returned with exit code 1
	I1117 23:11:23.949456   12028 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:24.253143   12028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504
	W1117 23:11:24.352228   12028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504 returned with exit code 1
	I1117 23:11:24.352228   12028 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:25.021833   12028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504
	W1117 23:11:25.113535   12028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504 returned with exit code 1
	W1117 23:11:25.113757   12028 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	
	W1117 23:11:25.113865   12028 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117230313-9504
	I1117 23:11:25.113865   12028 fix.go:57] fixHost completed within 24.0920901s
	I1117 23:11:25.113865   12028 start.go:80] releasing machines lock for "bridge-20211117230313-9504", held for 24.0921989s
	W1117 23:11:25.114404   12028 out.go:241] * Failed to start docker container. Running "minikube delete -p bridge-20211117230313-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p bridge-20211117230313-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:11:25.118677   12028 out.go:176] 
	W1117 23:11:25.118772   12028 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:11:25.118922   12028 out.go:241] * 
	* 
	W1117 23:11:25.119799   12028 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:11:25.122758   12028 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (38.26s)

                                                
                                    
TestPause/serial/Unpause (5.69s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:119: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-20211117230855-9504 --alsologtostderr -v=5
pause_test.go:119: (dbg) Non-zero exit: out/minikube-windows-amd64.exe unpause -p pause-20211117230855-9504 --alsologtostderr -v=5: exit status 80 (1.8918752s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 23:10:49.769740    6532 out.go:297] Setting OutFile to fd 1880 ...
	I1117 23:10:49.840029    6532 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:10:49.840105    6532 out.go:310] Setting ErrFile to fd 1776...
	I1117 23:10:49.840105    6532 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:10:49.850077    6532 mustload.go:65] Loading cluster: pause-20211117230855-9504
	I1117 23:10:49.850834    6532 config.go:176] Loaded profile config "pause-20211117230855-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:49.860658    6532 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:51.434505    6532 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:51.434505    6532 cli_runner.go:168] Completed: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: (1.573649s)
	I1117 23:10:51.439107    6532 out.go:176] 
	W1117 23:10:51.439107    6532 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	
	X Exiting due to GUEST_STATUS: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	
	W1117 23:10:51.439107    6532 out.go:241] * 
	* 
	W1117 23:10:51.447569    6532 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_unpause_00b12d9cedab4ae1bb930a621bdee2ada68dbd98_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_unpause_00b12d9cedab4ae1bb930a621bdee2ada68dbd98_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:10:51.450350    6532 out.go:176] 

                                                
                                                
** /stderr **
pause_test.go:121: failed to unpause minikube with args: "out/minikube-windows-amd64.exe unpause -p pause-20211117230855-9504 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Unpause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117230855-9504
helpers_test.go:235: (dbg) docker inspect pause-20211117230855-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:09:00Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-20211117230855-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/pause-20211117230855-9504/_data",
	        "Name": "pause-20211117230855-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504: exit status 7 (1.8021606s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:10:53.359334    9196 status.go:247] status error: host: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117230855-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Unpause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117230855-9504
helpers_test.go:235: (dbg) docker inspect pause-20211117230855-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:09:00Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-20211117230855-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/pause-20211117230855-9504/_data",
	        "Name": "pause-20211117230855-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504: exit status 7 (1.7823337s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:10:55.250409    7460 status.go:247] status error: host: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117230855-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/Unpause (5.69s)

                                                
                                    
TestPause/serial/PauseAgain (5.67s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20211117230855-9504 --alsologtostderr -v=5
pause_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p pause-20211117230855-9504 --alsologtostderr -v=5: exit status 80 (1.7692372s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 23:10:55.455802   11336 out.go:297] Setting OutFile to fd 1420 ...
	I1117 23:10:55.521734   11336 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:10:55.521821   11336 out.go:310] Setting ErrFile to fd 1856...
	I1117 23:10:55.521821   11336 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:10:55.532014   11336 out.go:304] Setting JSON to false
	I1117 23:10:55.532014   11336 mustload.go:65] Loading cluster: pause-20211117230855-9504
	I1117 23:10:55.533042   11336 config.go:176] Loaded profile config "pause-20211117230855-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:10:55.539744   11336 cli_runner.go:115] Run: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}
	W1117 23:10:57.006172   11336 cli_runner.go:162] docker container inspect pause-20211117230855-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:10:57.006258   11336 cli_runner.go:168] Completed: docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: (1.4664166s)
	I1117 23:10:57.010039   11336 out.go:176] 
	W1117 23:10:57.010307   11336 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	
	X Exiting due to GUEST_STATUS: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504
	
	W1117 23:10:57.010344   11336 out.go:241] * 
	* 
	W1117 23:10:57.017282   11336 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:10:57.020021   11336 out.go:176] 

                                                
                                                
** /stderr **
pause_test.go:110: failed to pause minikube with args: "out/minikube-windows-amd64.exe pause -p pause-20211117230855-9504 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117230855-9504
helpers_test.go:235: (dbg) docker inspect pause-20211117230855-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:09:00Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-20211117230855-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/pause-20211117230855-9504/_data",
	        "Name": "pause-20211117230855-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504: exit status 7 (1.8418863s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:10:58.971004   12008 status.go:247] status error: host: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117230855-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117230855-9504
helpers_test.go:235: (dbg) docker inspect pause-20211117230855-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:09:00Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-20211117230855-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/pause-20211117230855-9504/_data",
	        "Name": "pause-20211117230855-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504: exit status 7 (1.8420771s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:11:00.922715    6484 status.go:247] status error: host: state: unknown state "pause-20211117230855-9504": docker container inspect pause-20211117230855-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117230855-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117230855-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/PauseAgain (5.67s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (3.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

                                                
                                                
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:140: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.2714918s)
pause_test.go:166: (dbg) Run:  docker ps -a
pause_test.go:171: (dbg) Run:  docker volume inspect pause-20211117230855-9504
pause_test.go:171: (dbg) Non-zero exit: docker volume inspect pause-20211117230855-9504: exit status 1 (89.9902ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20211117230855-9504

                                                
                                                
** /stderr **
pause_test.go:176: (dbg) Run:  sudo docker network ls
pause_test.go:176: (dbg) Non-zero exit: sudo docker network ls: exec: "sudo": executable file not found in %PATH% (0s)
pause_test.go:178: failed to get list of networks: exec: "sudo": executable file not found in %PATH%
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyDeletedResources]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117230855-9504
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117230855-9504: exit status 1 (133.7428ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: pause-20211117230855-9504

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504: exit status 85 (307.7158ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117230855-9504" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117230855-9504"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117230855-9504" host is not running, skipping log retrieval (state="* Profile \"pause-20211117230855-9504\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117230855-9504\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyDeletedResources]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117230855-9504
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117230855-9504: exit status 1 (130.268ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: pause-20211117230855-9504

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20211117230855-9504 -n pause-20211117230855-9504: exit status 85 (279.4139ms)

-- stdout --
	* Profile "pause-20211117230855-9504" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117230855-9504"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117230855-9504" host is not running, skipping log retrieval (state="* Profile \"pause-20211117230855-9504\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117230855-9504\"")
--- FAIL: TestPause/serial/VerifyDeletedResources (3.33s)
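The harness above treats `minikube status` exit code 85 together with a `Profile ... not found` message as a soft failure ("status error: exit status 85 (may be ok)") and skips log retrieval. A minimal sketch of that classification, assuming only what this log shows (the exit code and message text come from the lines above, not from a documented minikube contract):

```go
package main

import (
	"fmt"
	"strings"
)

// profileMissing mirrors the harness's soft-failure handling: exit status 85
// plus a "Profile ... not found" hint means the profile simply does not exist,
// so log retrieval can be skipped rather than treated as a hard error.
// Sketch only; the exit-code semantics belong to minikube, not this code.
func profileMissing(exitCode int, stdout string) bool {
	return exitCode == 85 &&
		strings.Contains(stdout, `not found. Run "minikube profile list"`)
}

func main() {
	out := `* Profile "pause-20211117230855-9504" not found. Run "minikube profile list" to view all profiles.`
	fmt.Println(profileMissing(85, out)) // true
	fmt.Println(profileMissing(0, ""))   // false
}
```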

TestNetworkPlugins/group/kubenet/Start (38.16s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20211117230313-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubenet-20211117230313-9504 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: exit status 80 (38.0751274s)

-- stdout --
	* [kubenet-20211117230313-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node kubenet-20211117230313-9504 in cluster kubenet-20211117230313-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "kubenet-20211117230313-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:11:09.182309    4420 out.go:297] Setting OutFile to fd 1924 ...
	I1117 23:11:09.248841    4420 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:11:09.248841    4420 out.go:310] Setting ErrFile to fd 1712...
	I1117 23:11:09.248841    4420 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:11:09.260617    4420 out.go:304] Setting JSON to false
	I1117 23:11:09.263047    4420 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79985,"bootTime":1637110684,"procs":133,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:11:09.264009    4420 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:11:09.269764    4420 out.go:176] * [kubenet-20211117230313-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:11:09.269764    4420 notify.go:174] Checking for updates...
	I1117 23:11:09.273857    4420 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:11:09.275822    4420 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:11:09.278720    4420 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:11:09.281574    4420 config.go:176] Loaded profile config "bridge-20211117230313-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:11:09.282511    4420 config.go:176] Loaded profile config "kindnet-20211117230315-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:11:09.282511    4420 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:11:09.282511    4420 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:11:10.922313    4420 docker.go:132] docker version: linux-19.03.12
	I1117 23:11:10.925273    4420 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:11.297077    4420 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:11:11.018152212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:11:11.302616    4420 out.go:176] * Using the docker driver based on user configuration
	I1117 23:11:11.302864    4420 start.go:280] selected driver: docker
	I1117 23:11:11.302864    4420 start.go:775] validating driver "docker" against <nil>
	I1117 23:11:11.302933    4420 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:11:11.412855    4420 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:11.758813    4420 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:11:11.496376896 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:11:11.758813    4420 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:11:11.759903    4420 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:11:11.759903    4420 cni.go:89] network plugin configured as "kubenet", returning disabled
	I1117 23:11:11.759903    4420 start_flags.go:282] config:
	{Name:kubenet-20211117230313-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:kubenet-20211117230313-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:11:11.765176    4420 out.go:176] * Starting control plane node kubenet-20211117230313-9504 in cluster kubenet-20211117230313-9504
	I1117 23:11:11.765176    4420 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:11:11.768526    4420 out.go:176] * Pulling base image ...
	I1117 23:11:11.768526    4420 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:11:11.768526    4420 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:11:11.768526    4420 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:11:11.768526    4420 cache.go:57] Caching tarball of preloaded images
	I1117 23:11:11.769552    4420 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:11:11.769552    4420 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:11:11.769552    4420 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubenet-20211117230313-9504\config.json ...
	I1117 23:11:11.770177    4420 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubenet-20211117230313-9504\config.json: {Name:mk6abfb3aa0d7258ad7080ee43f6bc6b232beeb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:11:11.868107    4420 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:11:11.868107    4420 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:11:11.868107    4420 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:11:11.868316    4420 start.go:313] acquiring machines lock for kubenet-20211117230313-9504: {Name:mkdf60edeeebb51f163c17fa7e5eea0c03b563dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:11.868612    4420 start.go:317] acquired machines lock for "kubenet-20211117230313-9504" in 152.3µs
	I1117 23:11:11.868889    4420 start.go:89] Provisioning new machine with config: &{Name:kubenet-20211117230313-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:kubenet-20211117230313-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:11:11.868990    4420 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:11:11.873583    4420 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:11:11.874008    4420 start.go:160] libmachine.API.Create for "kubenet-20211117230313-9504" (driver="docker")
	I1117 23:11:11.874097    4420 client.go:168] LocalClient.Create starting
	I1117 23:11:11.874760    4420 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:11:11.875005    4420 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:11.875005    4420 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:11.875216    4420 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:11:11.875216    4420 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:11.875216    4420 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:11.880028    4420 cli_runner.go:115] Run: docker network inspect kubenet-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:11:11.970367    4420 cli_runner.go:162] docker network inspect kubenet-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:11:11.973372    4420 network_create.go:254] running [docker network inspect kubenet-20211117230313-9504] to gather additional debugging logs...
	I1117 23:11:11.973372    4420 cli_runner.go:115] Run: docker network inspect kubenet-20211117230313-9504
	W1117 23:11:12.078735    4420 cli_runner.go:162] docker network inspect kubenet-20211117230313-9504 returned with exit code 1
	I1117 23:11:12.078962    4420 network_create.go:257] error running [docker network inspect kubenet-20211117230313-9504]: docker network inspect kubenet-20211117230313-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20211117230313-9504
	I1117 23:11:12.078962    4420 network_create.go:259] output of [docker network inspect kubenet-20211117230313-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20211117230313-9504
	
	** /stderr **
	I1117 23:11:12.082296    4420 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:11:12.188237    4420 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006622c8] misses:0}
	I1117 23:11:12.188237    4420 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:12.188769    4420 network_create.go:106] attempt to create docker network kubenet-20211117230313-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:11:12.192532    4420 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20211117230313-9504
	I1117 23:11:12.399121    4420 network_create.go:90] docker network kubenet-20211117230313-9504 192.168.49.0/24 created
	I1117 23:11:12.399121    4420 kic.go:106] calculated static IP "192.168.49.2" for the "kubenet-20211117230313-9504" container
	I1117 23:11:12.406120    4420 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:11:12.496303    4420 cli_runner.go:115] Run: docker volume create kubenet-20211117230313-9504 --label name.minikube.sigs.k8s.io=kubenet-20211117230313-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:11:12.593115    4420 oci.go:102] Successfully created a docker volume kubenet-20211117230313-9504
	I1117 23:11:12.597728    4420 cli_runner.go:115] Run: docker run --rm --name kubenet-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20211117230313-9504 --entrypoint /usr/bin/test -v kubenet-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:11:13.780233    4420 cli_runner.go:168] Completed: docker run --rm --name kubenet-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20211117230313-9504 --entrypoint /usr/bin/test -v kubenet-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.1824965s)
	I1117 23:11:13.780233    4420 oci.go:106] Successfully prepared a docker volume kubenet-20211117230313-9504
	I1117 23:11:13.780233    4420 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:11:13.780233    4420 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:11:13.784230    4420 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:13.784230    4420 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:11:13.891566    4420 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:11:13.891566    4420 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
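The exit-125 failure above is not a tar problem: the stack trace suggests Docker Desktop for Windows tried to raise its interactive drive-sharing prompt (`Docker.WPF.PromptShareDirectory` via `ToastNotificationManager`) for the `-v C:\...` bind mount, which cannot work in a headless CI session, hence "The notification platform is unavailable". A hedged sketch of detecting this failure mode from stderr (the substrings are lifted from this log and are not a stable interface):

```go
package main

import (
	"fmt"
	"strings"
)

// isHeadlessSharePromptError reports whether a `docker run` failure looks like
// Docker Desktop for Windows failing to show its drive-sharing prompt in a
// session with no interactive desktop. Heuristic only, based on the markers
// observed in this report.
func isHeadlessSharePromptError(stderr string) bool {
	return strings.Contains(stderr, "The notification platform is unavailable") &&
		strings.Contains(stderr, "PromptShareDirectory")
}

func main() {
	stderr := `docker: Error response from daemon: status code not OK but 500: ` +
		`{"Message":"Unhandled exception: The notification platform is unavailable.",` +
		`"StackTrace":"at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext()"}`
	fmt.Println(isHeadlessSharePromptError(stderr)) // true
}
```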
	I1117 23:11:14.150339    4420 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2021-11-17 23:11:13.880670393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:11:14.151075    4420 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:11:14.151297    4420 client.go:171] LocalClient.Create took 2.2771822s
	I1117 23:11:16.161495    4420 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:16.165629    4420 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504
	W1117 23:11:16.260931    4420 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504 returned with exit code 1
	I1117 23:11:16.261252    4420 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:16.542276    4420 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504
	W1117 23:11:16.635352    4420 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504 returned with exit code 1
	I1117 23:11:16.635738    4420 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:17.181510    4420 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504
	W1117 23:11:17.279178    4420 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504 returned with exit code 1
	I1117 23:11:17.279380    4420 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:17.940144    4420 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504
	W1117 23:11:18.031442    4420 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504 returned with exit code 1
	W1117 23:11:18.031442    4420 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	
	W1117 23:11:18.031442    4420 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:18.031442    4420 start.go:129] duration metric: createHost completed in 6.1623305s
	I1117 23:11:18.031442    4420 start.go:80] releasing machines lock for "kubenet-20211117230313-9504", held for 6.162784s
	W1117 23:11:18.031442    4420 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:11:18.040944    4420 cli_runner.go:115] Run: docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:18.130513    4420 cli_runner.go:162] docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:18.130741    4420 delete.go:82] Unable to get host status for kubenet-20211117230313-9504, assuming it has already been deleted: state: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	W1117 23:11:18.130907    4420 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:11:18.130907    4420 start.go:547] Will try again in 5 seconds ...
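The `retry.go:31` lines above show jittered, roughly doubling delays (276ms, 540ms, 655ms, ...) before the outer 5-second retry. A loose reimplementation of that backoff pattern, as a sketch only (this is not minikube's actual retry code; base, factor, and jitter values are assumptions):

```python
import random

def backoff_delays(base=0.25, factor=2.0, jitter=0.3, attempts=4, seed=None):
    """Return jittered exponential-backoff delays in seconds.
    Each delay is base * factor^n, inflated by up to `jitter` fraction."""
    rng = random.Random(seed)
    delay = base
    out = []
    for _ in range(attempts):
        out.append(delay * (1 + rng.random() * jitter))
        delay *= factor
    return out
```

Jitter keeps concurrent retriers (several tests polling the same daemon) from hammering Docker in lockstep, which matters in a parallel run like this one.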
	I1117 23:11:23.133082    4420 start.go:313] acquiring machines lock for kubenet-20211117230313-9504: {Name:mkdf60edeeebb51f163c17fa7e5eea0c03b563dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:23.133510    4420 start.go:317] acquired machines lock for "kubenet-20211117230313-9504" in 218.3µs
	I1117 23:11:23.133662    4420 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:11:23.133739    4420 fix.go:55] fixHost starting: 
	I1117 23:11:23.142190    4420 cli_runner.go:115] Run: docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:23.231097    4420 cli_runner.go:162] docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:23.231393    4420 fix.go:108] recreateIfNeeded on kubenet-20211117230313-9504: state= err=unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:23.231556    4420 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:11:23.235896    4420 out.go:176] * docker "kubenet-20211117230313-9504" container is missing, will recreate.
	I1117 23:11:23.236107    4420 delete.go:124] DEMOLISHING kubenet-20211117230313-9504 ...
	I1117 23:11:23.244815    4420 cli_runner.go:115] Run: docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:23.331066    4420 cli_runner.go:162] docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:11:23.331066    4420 stop.go:75] unable to get state: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:23.331066    4420 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:23.340180    4420 cli_runner.go:115] Run: docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:23.428279    4420 cli_runner.go:162] docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:23.428417    4420 delete.go:82] Unable to get host status for kubenet-20211117230313-9504, assuming it has already been deleted: state: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:23.432785    4420 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kubenet-20211117230313-9504
	W1117 23:11:23.524958    4420 cli_runner.go:162] docker container inspect -f {{.Id}} kubenet-20211117230313-9504 returned with exit code 1
	I1117 23:11:23.525014    4420 kic.go:360] could not find the container kubenet-20211117230313-9504 to remove it. will try anyways
	I1117 23:11:23.529364    4420 cli_runner.go:115] Run: docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:23.622343    4420 cli_runner.go:162] docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:11:23.622557    4420 oci.go:83] error getting container status, will try to delete anyways: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:23.627711    4420 cli_runner.go:115] Run: docker exec --privileged -t kubenet-20211117230313-9504 /bin/bash -c "sudo init 0"
	W1117 23:11:23.735054    4420 cli_runner.go:162] docker exec --privileged -t kubenet-20211117230313-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:11:23.735324    4420 oci.go:658] error shutdown kubenet-20211117230313-9504: docker exec --privileged -t kubenet-20211117230313-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:24.740415    4420 cli_runner.go:115] Run: docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:24.824097    4420 cli_runner.go:162] docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:24.824520    4420 oci.go:670] temporary error verifying shutdown: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:24.824520    4420 oci.go:672] temporary error: container kubenet-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:24.824520    4420 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:25.291280    4420 cli_runner.go:115] Run: docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:25.391803    4420 cli_runner.go:162] docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:25.391803    4420 oci.go:670] temporary error verifying shutdown: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:25.391803    4420 oci.go:672] temporary error: container kubenet-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:25.391803    4420 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:26.285429    4420 cli_runner.go:115] Run: docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:26.386426    4420 cli_runner.go:162] docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:26.386426    4420 oci.go:670] temporary error verifying shutdown: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:26.386426    4420 oci.go:672] temporary error: container kubenet-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:26.386426    4420 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:27.026775    4420 cli_runner.go:115] Run: docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:27.121651    4420 cli_runner.go:162] docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:27.121889    4420 oci.go:670] temporary error verifying shutdown: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:27.121889    4420 oci.go:672] temporary error: container kubenet-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:27.121889    4420 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:28.236580    4420 cli_runner.go:115] Run: docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:28.354568    4420 cli_runner.go:162] docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:28.354568    4420 oci.go:670] temporary error verifying shutdown: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:28.354568    4420 oci.go:672] temporary error: container kubenet-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:28.354568    4420 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:29.872091    4420 cli_runner.go:115] Run: docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:29.966378    4420 cli_runner.go:162] docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:29.966718    4420 oci.go:670] temporary error verifying shutdown: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:29.966718    4420 oci.go:672] temporary error: container kubenet-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:29.966789    4420 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:33.012776    4420 cli_runner.go:115] Run: docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:33.108412    4420 cli_runner.go:162] docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:33.108412    4420 oci.go:670] temporary error verifying shutdown: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:33.108412    4420 oci.go:672] temporary error: container kubenet-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:33.108412    4420 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:38.895781    4420 cli_runner.go:115] Run: docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}
	W1117 23:11:38.984759    4420 cli_runner.go:162] docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:38.984956    4420 oci.go:670] temporary error verifying shutdown: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:38.984956    4420 oci.go:672] temporary error: container kubenet-20211117230313-9504 status is  but expect it to be exited
	I1117 23:11:38.985073    4420 oci.go:87] couldn't shut down kubenet-20211117230313-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubenet-20211117230313-9504": docker container inspect kubenet-20211117230313-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	 
	I1117 23:11:38.989292    4420 cli_runner.go:115] Run: docker rm -f -v kubenet-20211117230313-9504
	W1117 23:11:39.078535    4420 cli_runner.go:162] docker rm -f -v kubenet-20211117230313-9504 returned with exit code 1
	W1117 23:11:39.079603    4420 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:11:39.079718    4420 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:11:40.080618    4420 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:11:40.084744    4420 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 23:11:40.084997    4420 start.go:160] libmachine.API.Create for "kubenet-20211117230313-9504" (driver="docker")
	I1117 23:11:40.084997    4420 client.go:168] LocalClient.Create starting
	I1117 23:11:40.084997    4420 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:11:40.085570    4420 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:40.085570    4420 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:40.085737    4420 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:11:40.085926    4420 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:40.085926    4420 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:40.090709    4420 cli_runner.go:115] Run: docker network inspect kubenet-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:11:40.184368    4420 cli_runner.go:162] docker network inspect kubenet-20211117230313-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:11:40.188594    4420 network_create.go:254] running [docker network inspect kubenet-20211117230313-9504] to gather additional debugging logs...
	I1117 23:11:40.188682    4420 cli_runner.go:115] Run: docker network inspect kubenet-20211117230313-9504
	W1117 23:11:40.281910    4420 cli_runner.go:162] docker network inspect kubenet-20211117230313-9504 returned with exit code 1
	I1117 23:11:40.281957    4420 network_create.go:257] error running [docker network inspect kubenet-20211117230313-9504]: docker network inspect kubenet-20211117230313-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20211117230313-9504
	I1117 23:11:40.282128    4420 network_create.go:259] output of [docker network inspect kubenet-20211117230313-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20211117230313-9504
	
	** /stderr **
	I1117 23:11:40.286107    4420 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:11:40.392403    4420 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006622c8] amended:false}} dirty:map[] misses:0}
	I1117 23:11:40.392403    4420 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:40.404057    4420 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006622c8] amended:true}} dirty:map[192.168.49.0:0xc0006622c8 192.168.58.0:0xc0005d6600] misses:0}
	I1117 23:11:40.405088    4420 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:40.405146    4420 network_create.go:106] attempt to create docker network kubenet-20211117230313-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:11:40.409344    4420 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20211117230313-9504
	I1117 23:11:40.616980    4420 network_create.go:90] docker network kubenet-20211117230313-9504 192.168.58.0/24 created
	I1117 23:11:40.617230    4420 kic.go:106] calculated static IP "192.168.58.2" for the "kubenet-20211117230313-9504" container
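The network_create/kic lines above skip the reserved `192.168.49.0/24`, take the next free `/24`, and then derive the gateway (`.1`) and the node's static IP (`.2`) from it. That derivation can be sketched with the standard `ipaddress` module (the function is ours, a simplified model of the selection logged here, not minikube's implementation):

```python
import ipaddress

def pick_subnet(candidates, reserved):
    """Skip reserved /24s; from the first free one return
    (cidr, gateway, node_ip) using the first two host addresses."""
    for cidr in candidates:
        if cidr in reserved:
            continue
        hosts = list(ipaddress.ip_network(cidr).hosts())
        return cidr, str(hosts[0]), str(hosts[1])
    raise RuntimeError("no free private subnet")
```

With `192.168.49.0/24` reserved, this yields `192.168.58.0/24` with gateway `192.168.58.1` and node IP `192.168.58.2`, matching the log.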
	I1117 23:11:40.624679    4420 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:11:40.714159    4420 cli_runner.go:115] Run: docker volume create kubenet-20211117230313-9504 --label name.minikube.sigs.k8s.io=kubenet-20211117230313-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:11:40.799396    4420 oci.go:102] Successfully created a docker volume kubenet-20211117230313-9504
	I1117 23:11:40.804197    4420 cli_runner.go:115] Run: docker run --rm --name kubenet-20211117230313-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20211117230313-9504 --entrypoint /usr/bin/test -v kubenet-20211117230313-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:11:41.667902    4420 oci.go:106] Successfully prepared a docker volume kubenet-20211117230313-9504
	I1117 23:11:41.667964    4420 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:11:41.667964    4420 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:11:41.672753    4420 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:41.672753    4420 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:11:41.779460    4420 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:11:41.779460    4420 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20211117230313-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
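The extraction failure above is a Docker Desktop file-sharing error (the host-path volume mount triggered a share prompt that could not be shown) returned as a 500 with a JSON body, so the useful part is buried in `Message` behind a long `StackTrace`. A small sketch of pulling that field back out of such a stderr line (the helper is ours, for illustration, and assumes the `status code not OK but 500: {...}` shape seen here):

```python
import json

def daemon_error_message(stderr_line):
    """Extract the first line of the Message field from a
    'status code not OK but 500: {...json...}' docker daemon error.
    Falls back to the raw line when the pattern is absent."""
    _, sep, payload = stderr_line.partition("status code not OK but 500: ")
    if not sep:
        return stderr_line
    body = json.loads(payload.rstrip(". \n"))  # drop the trailing period
    return body["Message"].splitlines()[0]
```

Applied to the stderr above, this would surface just "Unhandled exception: The notification platform is unavailable." -- the actual root cause of the exit code 125.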
	I1117 23:11:42.018838    4420 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:47 OomKillDisable:true NGoroutines:55 SystemTime:2021-11-17 23:11:41.754425945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:11:42.019358    4420 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:11:42.019524    4420 client.go:171] LocalClient.Create took 1.9345126s
	I1117 23:11:44.027736    4420 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:44.033844    4420 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504
	W1117 23:11:44.152826    4420 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504 returned with exit code 1
	I1117 23:11:44.153192    4420 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:44.337954    4420 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504
	W1117 23:11:44.425528    4420 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504 returned with exit code 1
	I1117 23:11:44.425818    4420 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:44.761027    4420 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504
	W1117 23:11:44.885667    4420 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504 returned with exit code 1
	I1117 23:11:44.886090    4420 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:45.351210    4420 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504
	W1117 23:11:45.446670    4420 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504 returned with exit code 1
	W1117 23:11:45.446994    4420 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	
	W1117 23:11:45.446994    4420 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:45.446994    4420 start.go:129] duration metric: createHost completed in 5.3663359s
	I1117 23:11:45.454272    4420 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:45.457364    4420 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504
	W1117 23:11:45.548571    4420 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504 returned with exit code 1
	I1117 23:11:45.548977    4420 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:45.754572    4420 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504
	W1117 23:11:45.863894    4420 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504 returned with exit code 1
	I1117 23:11:45.864250    4420 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:46.166964    4420 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504
	W1117 23:11:46.257682    4420 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504 returned with exit code 1
	I1117 23:11:46.257821    4420 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:46.924677    4420 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504
	W1117 23:11:47.029906    4420 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504 returned with exit code 1
	W1117 23:11:47.030137    4420 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	
	W1117 23:11:47.030137    4420 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117230313-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117230313-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117230313-9504
	I1117 23:11:47.030137    4420 fix.go:57] fixHost completed within 23.8962194s
	I1117 23:11:47.030137    4420 start.go:80] releasing machines lock for "kubenet-20211117230313-9504", held for 23.896448s
	W1117 23:11:47.030137    4420 out.go:241] * Failed to start docker container. Running "minikube delete -p kubenet-20211117230313-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p kubenet-20211117230313-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:11:47.035391    4420 out.go:176] 
	W1117 23:11:47.035663    4420 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:11:47.035701    4420 out.go:241] * 
	* 
	W1117 23:11:47.036642    4420 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:11:47.040738    4420 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (38.16s)

TestStartStop/group/old-k8s-version/serial/FirstStart (40.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20211117231110-9504 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-20211117231110-9504 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0: exit status 80 (38.2374026s)

-- stdout --
	* [old-k8s-version-20211117231110-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node old-k8s-version-20211117231110-9504 in cluster old-k8s-version-20211117231110-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-20211117231110-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:11:10.252753   10928 out.go:297] Setting OutFile to fd 1844 ...
	I1117 23:11:10.323328   10928 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:11:10.323328   10928 out.go:310] Setting ErrFile to fd 1448...
	I1117 23:11:10.323328   10928 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:11:10.333263   10928 out.go:304] Setting JSON to false
	I1117 23:11:10.337673   10928 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79986,"bootTime":1637110684,"procs":133,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:11:10.337673   10928 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:11:10.340698   10928 out.go:176] * [old-k8s-version-20211117231110-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:11:10.341400   10928 notify.go:174] Checking for updates...
	I1117 23:11:10.343072   10928 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:11:10.346795   10928 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:11:10.349372   10928 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:11:10.351370   10928 config.go:176] Loaded profile config "bridge-20211117230313-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:11:10.351816   10928 config.go:176] Loaded profile config "kindnet-20211117230315-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:11:10.352361   10928 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:11:10.352361   10928 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:11:11.987378   10928 docker.go:132] docker version: linux-19.03.12
	I1117 23:11:11.990473   10928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:12.337971   10928 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:11:12.068868086 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:11:12.341164   10928 out.go:176] * Using the docker driver based on user configuration
	I1117 23:11:12.341164   10928 start.go:280] selected driver: docker
	I1117 23:11:12.341164   10928 start.go:775] validating driver "docker" against <nil>
	I1117 23:11:12.341164   10928 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:11:12.400140   10928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:12.757773   10928 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:11:12.476306546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:11:12.757988   10928 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:11:12.758520   10928 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:11:12.758587   10928 cni.go:93] Creating CNI manager for ""
	I1117 23:11:12.758587   10928 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:11:12.758587   10928 start_flags.go:282] config:
	{Name:old-k8s-version-20211117231110-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20211117231110-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:11:12.761554   10928 out.go:176] * Starting control plane node old-k8s-version-20211117231110-9504 in cluster old-k8s-version-20211117231110-9504
	I1117 23:11:12.761554   10928 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:11:12.764080   10928 out.go:176] * Pulling base image ...
	I1117 23:11:12.764080   10928 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 23:11:12.765088   10928 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:11:12.765088   10928 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4
	I1117 23:11:12.765088   10928 cache.go:57] Caching tarball of preloaded images
	I1117 23:11:12.765088   10928 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:11:12.765088   10928 cache.go:60] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I1117 23:11:12.765088   10928 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20211117231110-9504\config.json ...
	I1117 23:11:12.766078   10928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20211117231110-9504\config.json: {Name:mk7913e93fcd712794024f1d8dfbc301041cb6d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:11:12.868350   10928 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:11:12.868350   10928 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:11:12.868350   10928 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:11:12.868350   10928 start.go:313] acquiring machines lock for old-k8s-version-20211117231110-9504: {Name:mkf20483f474415f88720279d1dc914d2f1e71fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:12.868350   10928 start.go:317] acquired machines lock for "old-k8s-version-20211117231110-9504" in 0s
	I1117 23:11:12.868350   10928 start.go:89] Provisioning new machine with config: &{Name:old-k8s-version-20211117231110-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20211117231110-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I1117 23:11:12.868350   10928 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:11:12.872363   10928 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:11:12.873337   10928 start.go:160] libmachine.API.Create for "old-k8s-version-20211117231110-9504" (driver="docker")
	I1117 23:11:12.873337   10928 client.go:168] LocalClient.Create starting
	I1117 23:11:12.873337   10928 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:11:12.873337   10928 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:12.873337   10928 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:12.874339   10928 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:11:12.874339   10928 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:12.874339   10928 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:12.879350   10928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117231110-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:11:12.973628   10928 cli_runner.go:162] docker network inspect old-k8s-version-20211117231110-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:11:12.978010   10928 network_create.go:254] running [docker network inspect old-k8s-version-20211117231110-9504] to gather additional debugging logs...
	I1117 23:11:12.978010   10928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117231110-9504
	W1117 23:11:13.073948   10928 cli_runner.go:162] docker network inspect old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:11:13.073948   10928 network_create.go:257] error running [docker network inspect old-k8s-version-20211117231110-9504]: docker network inspect old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20211117231110-9504
	I1117 23:11:13.073948   10928 network_create.go:259] output of [docker network inspect old-k8s-version-20211117231110-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20211117231110-9504
	
	** /stderr **
	I1117 23:11:13.077624   10928 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:11:13.185696   10928 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006d0148] misses:0}
	I1117 23:11:13.186691   10928 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:13.186691   10928 network_create.go:106] attempt to create docker network old-k8s-version-20211117231110-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:11:13.189512   10928 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117231110-9504
	W1117 23:11:13.276404   10928 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117231110-9504 returned with exit code 1
	W1117 23:11:13.276404   10928 network_create.go:98] failed to create docker network old-k8s-version-20211117231110-9504 192.168.49.0/24, will retry: subnet is taken
	I1117 23:11:13.292352   10928 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006d0148] amended:false}} dirty:map[] misses:0}
	I1117 23:11:13.292352   10928 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:13.320940   10928 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006d0148] amended:true}} dirty:map[192.168.49.0:0xc0006d0148 192.168.58.0:0xc000766558] misses:0}
	I1117 23:11:13.320940   10928 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:13.320940   10928 network_create.go:106] attempt to create docker network old-k8s-version-20211117231110-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:11:13.324530   10928 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117231110-9504
	I1117 23:11:13.550916   10928 network_create.go:90] docker network old-k8s-version-20211117231110-9504 192.168.58.0/24 created
	I1117 23:11:13.551168   10928 kic.go:106] calculated static IP "192.168.58.2" for the "old-k8s-version-20211117231110-9504" container
	I1117 23:11:13.561021   10928 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:11:13.666411   10928 cli_runner.go:115] Run: docker volume create old-k8s-version-20211117231110-9504 --label name.minikube.sigs.k8s.io=old-k8s-version-20211117231110-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:11:13.778234   10928 oci.go:102] Successfully created a docker volume old-k8s-version-20211117231110-9504
	I1117 23:11:13.782233   10928 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20211117231110-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20211117231110-9504 --entrypoint /usr/bin/test -v old-k8s-version-20211117231110-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:11:14.973338   10928 cli_runner.go:168] Completed: docker run --rm --name old-k8s-version-20211117231110-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20211117231110-9504 --entrypoint /usr/bin/test -v old-k8s-version-20211117231110-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.1910962s)
	I1117 23:11:14.973338   10928 oci.go:106] Successfully prepared a docker volume old-k8s-version-20211117231110-9504
	I1117 23:11:14.973338   10928 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 23:11:14.973338   10928 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:11:14.978269   10928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:14.978393   10928 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:11:15.088896   10928 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:11:15.088986   10928 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:11:15.346154   10928 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:54 SystemTime:2021-11-17 23:11:15.081873715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:11:15.346154   10928 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:11:15.346154   10928 client.go:171] LocalClient.Create took 2.4727986s
	I1117 23:11:17.354181   10928 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:17.358259   10928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:11:17.447715   10928 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:11:17.448021   10928 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:17.729243   10928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:11:17.816444   10928 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:11:17.816705   10928 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:18.362412   10928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:11:18.456707   10928 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:11:18.456909   10928 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:19.116276   10928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:11:19.223234   10928 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	W1117 23:11:19.223532   10928 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	
	W1117 23:11:19.223621   10928 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:19.223621   10928 start.go:129] duration metric: createHost completed in 6.3552231s
	I1117 23:11:19.223621   10928 start.go:80] releasing machines lock for "old-k8s-version-20211117231110-9504", held for 6.3552231s
	W1117 23:11:19.223772   10928 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:11:19.232875   10928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:19.337344   10928 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:19.337344   10928 delete.go:82] Unable to get host status for old-k8s-version-20211117231110-9504, assuming it has already been deleted: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	W1117 23:11:19.337344   10928 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:11:19.337344   10928 start.go:547] Will try again in 5 seconds ...
	I1117 23:11:24.338730   10928 start.go:313] acquiring machines lock for old-k8s-version-20211117231110-9504: {Name:mkf20483f474415f88720279d1dc914d2f1e71fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:24.339051   10928 start.go:317] acquired machines lock for "old-k8s-version-20211117231110-9504" in 276.5µs
	I1117 23:11:24.339262   10928 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:11:24.339298   10928 fix.go:55] fixHost starting: 
	I1117 23:11:24.346441   10928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:24.430022   10928 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:24.430022   10928 fix.go:108] recreateIfNeeded on old-k8s-version-20211117231110-9504: state= err=unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:24.430294   10928 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:11:24.433225   10928 out.go:176] * docker "old-k8s-version-20211117231110-9504" container is missing, will recreate.
	I1117 23:11:24.433225   10928 delete.go:124] DEMOLISHING old-k8s-version-20211117231110-9504 ...
	I1117 23:11:24.439836   10928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:24.532260   10928 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:11:24.532260   10928 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:24.532260   10928 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:24.543076   10928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:24.632280   10928 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:24.632596   10928 delete.go:82] Unable to get host status for old-k8s-version-20211117231110-9504, assuming it has already been deleted: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:24.638266   10928 cli_runner.go:115] Run: docker container inspect -f {{.Id}} old-k8s-version-20211117231110-9504
	W1117 23:11:24.725476   10928 cli_runner.go:162] docker container inspect -f {{.Id}} old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:11:24.725565   10928 kic.go:360] could not find the container old-k8s-version-20211117231110-9504 to remove it. will try anyways
	I1117 23:11:24.729745   10928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:24.823447   10928 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:11:24.823572   10928 oci.go:83] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:24.827044   10928 cli_runner.go:115] Run: docker exec --privileged -t old-k8s-version-20211117231110-9504 /bin/bash -c "sudo init 0"
	W1117 23:11:24.915464   10928 cli_runner.go:162] docker exec --privileged -t old-k8s-version-20211117231110-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:11:24.915621   10928 oci.go:658] error shutdown old-k8s-version-20211117231110-9504: docker exec --privileged -t old-k8s-version-20211117231110-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:25.923671   10928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:26.021366   10928 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:26.021789   10928 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:26.021851   10928 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:26.021851   10928 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:26.492001   10928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:26.595823   10928 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:26.595823   10928 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:26.595940   10928 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:26.595940   10928 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:27.492803   10928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:27.594484   10928 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:27.594568   10928 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:27.594568   10928 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:27.594568   10928 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:28.236580   10928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:28.351511   10928 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:28.351805   10928 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:28.351884   10928 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:28.351884   10928 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:29.465583   10928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:29.569034   10928 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:29.569034   10928 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:29.569352   10928 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:29.569445   10928 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:31.088048   10928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:31.180128   10928 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:31.180341   10928 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:31.180381   10928 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:31.180381   10928 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:34.227716   10928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:34.317158   10928 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:34.317479   10928 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:34.317479   10928 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:34.317617   10928 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:40.104539   10928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:40.196311   10928 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:40.196311   10928 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:40.196311   10928 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:40.196311   10928 oci.go:87] couldn't shut down old-k8s-version-20211117231110-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	 
	I1117 23:11:40.200227   10928 cli_runner.go:115] Run: docker rm -f -v old-k8s-version-20211117231110-9504
	W1117 23:11:40.286280   10928 cli_runner.go:162] docker rm -f -v old-k8s-version-20211117231110-9504 returned with exit code 1
	W1117 23:11:40.287363   10928 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:11:40.287363   10928 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:11:41.287590   10928 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:11:41.292968   10928 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:11:41.292968   10928 start.go:160] libmachine.API.Create for "old-k8s-version-20211117231110-9504" (driver="docker")
	I1117 23:11:41.293521   10928 client.go:168] LocalClient.Create starting
	I1117 23:11:41.293693   10928 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:11:41.293693   10928 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:41.294272   10928 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:41.294272   10928 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:11:41.294627   10928 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:41.294703   10928 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:41.298588   10928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117231110-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:11:41.392711   10928 cli_runner.go:162] docker network inspect old-k8s-version-20211117231110-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:11:41.396598   10928 network_create.go:254] running [docker network inspect old-k8s-version-20211117231110-9504] to gather additional debugging logs...
	I1117 23:11:41.396598   10928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117231110-9504
	W1117 23:11:41.487575   10928 cli_runner.go:162] docker network inspect old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:11:41.487739   10928 network_create.go:257] error running [docker network inspect old-k8s-version-20211117231110-9504]: docker network inspect old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20211117231110-9504
	I1117 23:11:41.487739   10928 network_create.go:259] output of [docker network inspect old-k8s-version-20211117231110-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20211117231110-9504
	
	** /stderr **
	I1117 23:11:41.491352   10928 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:11:41.602969   10928 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006d0148] amended:true}} dirty:map[192.168.49.0:0xc0006d0148 192.168.58.0:0xc000766558] misses:0}
	I1117 23:11:41.602969   10928 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:41.615052   10928 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006d0148] amended:true}} dirty:map[192.168.49.0:0xc0006d0148 192.168.58.0:0xc000766558] misses:1}
	I1117 23:11:41.615052   10928 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:41.626902   10928 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006d0148] amended:true}} dirty:map[192.168.49.0:0xc0006d0148 192.168.58.0:0xc000766558 192.168.67.0:0xc0001f0198] misses:1}
	I1117 23:11:41.626902   10928 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:41.626902   10928 network_create.go:106] attempt to create docker network old-k8s-version-20211117231110-9504 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 23:11:41.631838   10928 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117231110-9504
	I1117 23:11:41.848235   10928 network_create.go:90] docker network old-k8s-version-20211117231110-9504 192.168.67.0/24 created
	I1117 23:11:41.848459   10928 kic.go:106] calculated static IP "192.168.67.2" for the "old-k8s-version-20211117231110-9504" container
	I1117 23:11:41.856845   10928 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:11:41.958329   10928 cli_runner.go:115] Run: docker volume create old-k8s-version-20211117231110-9504 --label name.minikube.sigs.k8s.io=old-k8s-version-20211117231110-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:11:42.045353   10928 oci.go:102] Successfully created a docker volume old-k8s-version-20211117231110-9504
	I1117 23:11:42.049518   10928 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20211117231110-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20211117231110-9504 --entrypoint /usr/bin/test -v old-k8s-version-20211117231110-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:11:42.918238   10928 oci.go:106] Successfully prepared a docker volume old-k8s-version-20211117231110-9504
	I1117 23:11:42.918238   10928 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 23:11:42.918238   10928 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:11:42.921848   10928 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:11:42.922850   10928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:11:43.041950   10928 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:11:43.041950   10928 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:11:43.272694   10928 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:11:43.006400528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:11:43.272976   10928 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:11:43.273015   10928 client.go:171] LocalClient.Create took 1.9794792s
	I1117 23:11:45.280864   10928 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:45.284441   10928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:11:45.388253   10928 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:11:45.388481   10928 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:45.573423   10928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:11:45.672512   10928 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:11:45.673045   10928 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:46.007419   10928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:11:46.110538   10928 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:11:46.110601   10928 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:46.576832   10928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:11:46.686513   10928 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	W1117 23:11:46.689797   10928 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	
	W1117 23:11:46.689797   10928 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:46.689797   10928 start.go:129] duration metric: createHost completed in 5.4019456s
	I1117 23:11:46.697489   10928 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:46.700806   10928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:11:46.788316   10928 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:11:46.788555   10928 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:46.988446   10928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:11:47.083726   10928 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:11:47.083726   10928 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:47.385949   10928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:11:47.494010   10928 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:11:47.494010   10928 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:48.162040   10928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:11:48.267665   10928 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	W1117 23:11:48.267909   10928 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	
	W1117 23:11:48.267909   10928 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:11:48.267909   10928 fix.go:57] fixHost completed within 23.928432s
	I1117 23:11:48.267909   10928 start.go:80] releasing machines lock for "old-k8s-version-20211117231110-9504", held for 23.9286547s
	W1117 23:11:48.268524   10928 out.go:241] * Failed to start docker container. Running "minikube delete -p old-k8s-version-20211117231110-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-20211117231110-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:11:48.273850   10928 out.go:176] 
	W1117 23:11:48.274067   10928 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:11:48.274067   10928 out.go:241] * 
	* 
	W1117 23:11:48.276596   10928 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:11:48.279287   10928 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p old-k8s-version-20211117231110-9504 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117231110-9504:
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117231110-9504",
	        "Id": "6009a0bc1d99ae6c82ca17b35f379158c558287ff62cc23322707284d430eba4",
	        "Created": "2021-11-17T23:11:41.713808941Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504: exit status 7 (1.8847224s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 23:11:50.382869    8284 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (40.33s)

TestStartStop/group/embed-certs/serial/FirstStart (40.36s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20211117231110-9504 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.22.3
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p embed-certs-20211117231110-9504 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.22.3: exit status 80 (38.2922884s)
-- stdout --
	* [embed-certs-20211117231110-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node embed-certs-20211117231110-9504 in cluster embed-certs-20211117231110-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-20211117231110-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	
-- /stdout --
** stderr ** 
	I1117 23:11:11.128091    3748 out.go:297] Setting OutFile to fd 1896 ...
	I1117 23:11:11.197089    3748 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:11:11.197089    3748 out.go:310] Setting ErrFile to fd 1632...
	I1117 23:11:11.197089    3748 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:11:11.207089    3748 out.go:304] Setting JSON to false
	I1117 23:11:11.211077    3748 start.go:112] hostinfo: {"hostname":"minikube2","uptime":79987,"bootTime":1637110684,"procs":134,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:11:11.211077    3748 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:11:11.215084    3748 out.go:176] * [embed-certs-20211117231110-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:11:11.224163    3748 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:11:11.218134    3748 notify.go:174] Checking for updates...
	I1117 23:11:11.226675    3748 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:11:11.228846    3748 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:11:11.231743    3748 config.go:176] Loaded profile config "bridge-20211117230313-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:11:11.232285    3748 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:11:11.232590    3748 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:11:12.846481    3748 docker.go:132] docker version: linux-19.03.12
	I1117 23:11:12.849563    3748 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:13.210740    3748 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:54 SystemTime:2021-11-17 23:11:12.935021291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:11:13.216546    3748 out.go:176] * Using the docker driver based on user configuration
	I1117 23:11:13.216546    3748 start.go:280] selected driver: docker
	I1117 23:11:13.216546    3748 start.go:775] validating driver "docker" against <nil>
	I1117 23:11:13.216546    3748 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:11:13.272406    3748 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:13.624301    3748 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:49 OomKillDisable:true NGoroutines:71 SystemTime:2021-11-17 23:11:13.351523519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:11:13.624301    3748 start_flags.go:268] no existing cluster config was found, will generate one from the flags
	I1117 23:11:13.624996    3748 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:11:13.624996    3748 cni.go:93] Creating CNI manager for ""
	I1117 23:11:13.624996    3748 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:11:13.624996    3748 start_flags.go:282] config:
	{Name:embed-certs-20211117231110-9504 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:embed-certs-20211117231110-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:11:13.639927    3748 out.go:176] * Starting control plane node embed-certs-20211117231110-9504 in cluster embed-certs-20211117231110-9504
	I1117 23:11:13.640136    3748 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:11:13.643921    3748 out.go:176] * Pulling base image ...
	I1117 23:11:13.644018    3748 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:11:13.644018    3748 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:11:13.644018    3748 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:11:13.644018    3748 cache.go:57] Caching tarball of preloaded images
	I1117 23:11:13.644735    3748 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:11:13.644848    3748 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:11:13.645072    3748 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20211117231110-9504\config.json ...
	I1117 23:11:13.645533    3748 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20211117231110-9504\config.json: {Name:mk2e4eabd72c91643ff8c15352c495e55c5eeedb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:11:13.750390    3748 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:11:13.750390    3748 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:11:13.750487    3748 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:11:13.750609    3748 start.go:313] acquiring machines lock for embed-certs-20211117231110-9504: {Name:mke5160b0799570aa8eaa937f5551637df079826 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:13.750794    3748 start.go:317] acquired machines lock for "embed-certs-20211117231110-9504" in 154.1µs
	I1117 23:11:13.751005    3748 start.go:89] Provisioning new machine with config: &{Name:embed-certs-20211117231110-9504 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:embed-certs-20211117231110-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:11:13.751131    3748 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:11:13.754537    3748 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:11:13.754537    3748 start.go:160] libmachine.API.Create for "embed-certs-20211117231110-9504" (driver="docker")
	I1117 23:11:13.755078    3748 client.go:168] LocalClient.Create starting
	I1117 23:11:13.755527    3748 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:11:13.755801    3748 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:13.755801    3748 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:13.756062    3748 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:11:13.756271    3748 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:13.756328    3748 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:13.761226    3748 cli_runner.go:115] Run: docker network inspect embed-certs-20211117231110-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:11:13.854352    3748 cli_runner.go:162] docker network inspect embed-certs-20211117231110-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:11:13.860213    3748 network_create.go:254] running [docker network inspect embed-certs-20211117231110-9504] to gather additional debugging logs...
	I1117 23:11:13.860213    3748 cli_runner.go:115] Run: docker network inspect embed-certs-20211117231110-9504
	W1117 23:11:13.960453    3748 cli_runner.go:162] docker network inspect embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:11:13.960453    3748 network_create.go:257] error running [docker network inspect embed-certs-20211117231110-9504]: docker network inspect embed-certs-20211117231110-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20211117231110-9504
	I1117 23:11:13.960453    3748 network_create.go:259] output of [docker network inspect embed-certs-20211117231110-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20211117231110-9504
	
	** /stderr **
	I1117 23:11:13.964223    3748 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:11:14.072339    3748 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0002d6350] misses:0}
	I1117 23:11:14.072339    3748 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:14.072339    3748 network_create.go:106] attempt to create docker network embed-certs-20211117231110-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:11:14.076280    3748 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117231110-9504
	W1117 23:11:14.187803    3748 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:11:14.187803    3748 network_create.go:98] failed to create docker network embed-certs-20211117231110-9504 192.168.49.0/24, will retry: subnet is taken
	I1117 23:11:14.207702    3748 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0002d6350] amended:false}} dirty:map[] misses:0}
	I1117 23:11:14.207778    3748 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:14.220593    3748 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0002d6350] amended:true}} dirty:map[192.168.49.0:0xc0002d6350 192.168.58.0:0xc0002d63d8] misses:0}
	I1117 23:11:14.221593    3748 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:14.221593    3748 network_create.go:106] attempt to create docker network embed-certs-20211117231110-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:11:14.225802    3748 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117231110-9504
	W1117 23:11:14.316938    3748 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:11:14.317034    3748 network_create.go:98] failed to create docker network embed-certs-20211117231110-9504 192.168.58.0/24, will retry: subnet is taken
	I1117 23:11:14.333165    3748 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0002d6350] amended:true}} dirty:map[192.168.49.0:0xc0002d6350 192.168.58.0:0xc0002d63d8] misses:1}
	I1117 23:11:14.333165    3748 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:14.349282    3748 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0002d6350] amended:true}} dirty:map[192.168.49.0:0xc0002d6350 192.168.58.0:0xc0002d63d8 192.168.67.0:0xc000b9c050] misses:1}
	I1117 23:11:14.349348    3748 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:14.349348    3748 network_create.go:106] attempt to create docker network embed-certs-20211117231110-9504 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 23:11:14.353344    3748 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117231110-9504
	I1117 23:11:14.583126    3748 network_create.go:90] docker network embed-certs-20211117231110-9504 192.168.67.0/24 created
	I1117 23:11:14.583126    3748 kic.go:106] calculated static IP "192.168.67.2" for the "embed-certs-20211117231110-9504" container
	I1117 23:11:14.591515    3748 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:11:14.685277    3748 cli_runner.go:115] Run: docker volume create embed-certs-20211117231110-9504 --label name.minikube.sigs.k8s.io=embed-certs-20211117231110-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:11:14.779522    3748 oci.go:102] Successfully created a docker volume embed-certs-20211117231110-9504
	I1117 23:11:14.783653    3748 cli_runner.go:115] Run: docker run --rm --name embed-certs-20211117231110-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211117231110-9504 --entrypoint /usr/bin/test -v embed-certs-20211117231110-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:11:15.915650    3748 cli_runner.go:168] Completed: docker run --rm --name embed-certs-20211117231110-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211117231110-9504 --entrypoint /usr/bin/test -v embed-certs-20211117231110-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.131989s)
	I1117 23:11:15.915650    3748 oci.go:106] Successfully prepared a docker volume embed-certs-20211117231110-9504
	I1117 23:11:15.915650    3748 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:11:15.915650    3748 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:11:15.920171    3748 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:15.920240    3748 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:11:16.050706    3748 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:11:16.050706    3748 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:11:16.301229    3748 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:11:16.011756599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:11:16.301697    3748 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:11:16.301697    3748 client.go:171] LocalClient.Create took 2.5466001s
	I1117 23:11:18.310505    3748 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:18.313537    3748 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:11:18.405619    3748 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:11:18.405854    3748 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:18.687527    3748 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:11:18.783273    3748 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:11:18.783352    3748 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:19.328843    3748 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:11:19.419355    3748 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:11:19.419595    3748 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:20.080237    3748 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:11:20.177410    3748 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:11:20.177553    3748 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	
	W1117 23:11:20.177553    3748 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:20.177553    3748 start.go:129] duration metric: createHost completed in 6.4263736s
	I1117 23:11:20.177553    3748 start.go:80] releasing machines lock for "embed-certs-20211117231110-9504", held for 6.4266501s
	W1117 23:11:20.177553    3748 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:11:20.187939    3748 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:20.275523    3748 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:20.275523    3748 delete.go:82] Unable to get host status for embed-certs-20211117231110-9504, assuming it has already been deleted: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	W1117 23:11:20.276025    3748 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:11:20.276025    3748 start.go:547] Will try again in 5 seconds ...
	I1117 23:11:25.276275    3748 start.go:313] acquiring machines lock for embed-certs-20211117231110-9504: {Name:mke5160b0799570aa8eaa937f5551637df079826 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:25.276275    3748 start.go:317] acquired machines lock for "embed-certs-20211117231110-9504" in 0s
	I1117 23:11:25.276275    3748 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:11:25.276275    3748 fix.go:55] fixHost starting: 
	I1117 23:11:25.284271    3748 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:25.386818    3748 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:25.386818    3748 fix.go:108] recreateIfNeeded on embed-certs-20211117231110-9504: state= err=unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:25.386818    3748 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:11:25.389811    3748 out.go:176] * docker "embed-certs-20211117231110-9504" container is missing, will recreate.
	I1117 23:11:25.389811    3748 delete.go:124] DEMOLISHING embed-certs-20211117231110-9504 ...
	I1117 23:11:25.395805    3748 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:25.486360    3748 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:11:25.486520    3748 stop.go:75] unable to get state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:25.486604    3748 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:25.495724    3748 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:25.585986    3748 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:25.586218    3748 delete.go:82] Unable to get host status for embed-certs-20211117231110-9504, assuming it has already been deleted: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:25.590567    3748 cli_runner.go:115] Run: docker container inspect -f {{.Id}} embed-certs-20211117231110-9504
	W1117 23:11:25.683444    3748 cli_runner.go:162] docker container inspect -f {{.Id}} embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:11:25.683444    3748 kic.go:360] could not find the container embed-certs-20211117231110-9504 to remove it. will try anyways
	I1117 23:11:25.687663    3748 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:25.777555    3748 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:11:25.777555    3748 oci.go:83] error getting container status, will try to delete anyways: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:25.781862    3748 cli_runner.go:115] Run: docker exec --privileged -t embed-certs-20211117231110-9504 /bin/bash -c "sudo init 0"
	W1117 23:11:25.871976    3748 cli_runner.go:162] docker exec --privileged -t embed-certs-20211117231110-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:11:25.871976    3748 oci.go:658] error shutdown embed-certs-20211117231110-9504: docker exec --privileged -t embed-certs-20211117231110-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:26.877895    3748 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:26.971104    3748 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:26.971104    3748 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:26.971104    3748 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:26.971104    3748 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:27.439480    3748 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:27.548754    3748 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:27.548842    3748 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:27.548842    3748 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:27.548842    3748 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:28.445134    3748 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:28.544134    3748 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:28.544464    3748 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:28.544464    3748 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:28.544464    3748 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:29.185731    3748 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:29.280968    3748 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:29.281033    3748 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:29.281033    3748 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:29.281033    3748 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:30.397328    3748 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:30.487459    3748 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:30.487606    3748 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:30.487723    3748 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:30.487791    3748 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:32.004174    3748 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:32.093461    3748 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:32.093546    3748 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:32.093618    3748 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:32.093647    3748 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:35.139206    3748 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:35.243091    3748 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:35.243396    3748 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:35.243396    3748 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:35.243477    3748 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:41.031017    3748 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:11:41.123518    3748 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:41.123672    3748 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:41.123672    3748 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:11:41.123672    3748 oci.go:87] couldn't shut down embed-certs-20211117231110-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	 
	I1117 23:11:41.128127    3748 cli_runner.go:115] Run: docker rm -f -v embed-certs-20211117231110-9504
	W1117 23:11:41.217404    3748 cli_runner.go:162] docker rm -f -v embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:11:41.218490    3748 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:11:41.218490    3748 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:11:42.219238    3748 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:11:42.222770    3748 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:11:42.223109    3748 start.go:160] libmachine.API.Create for "embed-certs-20211117231110-9504" (driver="docker")
	I1117 23:11:42.223219    3748 client.go:168] LocalClient.Create starting
	I1117 23:11:42.223787    3748 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:11:42.223981    3748 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:42.224098    3748 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:42.224392    3748 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:11:42.224567    3748 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:42.224660    3748 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:42.229460    3748 cli_runner.go:115] Run: docker network inspect embed-certs-20211117231110-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:11:42.324141    3748 cli_runner.go:162] docker network inspect embed-certs-20211117231110-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:11:42.327723    3748 network_create.go:254] running [docker network inspect embed-certs-20211117231110-9504] to gather additional debugging logs...
	I1117 23:11:42.327723    3748 cli_runner.go:115] Run: docker network inspect embed-certs-20211117231110-9504
	W1117 23:11:42.430535    3748 cli_runner.go:162] docker network inspect embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:11:42.430706    3748 network_create.go:257] error running [docker network inspect embed-certs-20211117231110-9504]: docker network inspect embed-certs-20211117231110-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20211117231110-9504
	I1117 23:11:42.430706    3748 network_create.go:259] output of [docker network inspect embed-certs-20211117231110-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20211117231110-9504
	
	** /stderr **
	I1117 23:11:42.433371    3748 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:11:42.536413    3748 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0002d6350] amended:true}} dirty:map[192.168.49.0:0xc0002d6350 192.168.58.0:0xc0002d63d8 192.168.67.0:0xc000b9c050] misses:1}
	I1117 23:11:42.536413    3748 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:42.549381    3748 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0002d6350] amended:true}} dirty:map[192.168.49.0:0xc0002d6350 192.168.58.0:0xc0002d63d8 192.168.67.0:0xc000b9c050] misses:2}
	I1117 23:11:42.549492    3748 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:42.560956    3748 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0002d6350 192.168.58.0:0xc0002d63d8 192.168.67.0:0xc000b9c050] amended:false}} dirty:map[] misses:0}
	I1117 23:11:42.560956    3748 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:42.572487    3748 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0002d6350 192.168.58.0:0xc0002d63d8 192.168.67.0:0xc000b9c050] amended:true}} dirty:map[192.168.49.0:0xc0002d6350 192.168.58.0:0xc0002d63d8 192.168.67.0:0xc000b9c050 192.168.76.0:0xc000b9c2d0] misses:0}
	I1117 23:11:42.572826    3748 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:42.572826    3748 network_create.go:106] attempt to create docker network embed-certs-20211117231110-9504 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1117 23:11:42.576908    3748 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117231110-9504
	I1117 23:11:42.792619    3748 network_create.go:90] docker network embed-certs-20211117231110-9504 192.168.76.0/24 created
	I1117 23:11:42.792619    3748 kic.go:106] calculated static IP "192.168.76.2" for the "embed-certs-20211117231110-9504" container
	I1117 23:11:42.799916    3748 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:11:42.894580    3748 cli_runner.go:115] Run: docker volume create embed-certs-20211117231110-9504 --label name.minikube.sigs.k8s.io=embed-certs-20211117231110-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:11:42.984747    3748 oci.go:102] Successfully created a docker volume embed-certs-20211117231110-9504
	I1117 23:11:42.989528    3748 cli_runner.go:115] Run: docker run --rm --name embed-certs-20211117231110-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211117231110-9504 --entrypoint /usr/bin/test -v embed-certs-20211117231110-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:11:43.874248    3748 oci.go:106] Successfully prepared a docker volume embed-certs-20211117231110-9504
	I1117 23:11:43.874248    3748 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:11:43.874248    3748 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:11:43.879319    3748 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:43.879630    3748 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:11:43.988992    3748 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:11:43.988992    3748 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:11:44.240158    3748 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 23:11:43.965594217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:11:44.240488    3748 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:11:44.240488    3748 client.go:171] LocalClient.Create took 2.0171648s
	I1117 23:11:46.249910    3748 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:46.253587    3748 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:11:46.347936    3748 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:11:46.347936    3748 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:46.531088    3748 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:11:46.631860    3748 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:11:46.632029    3748 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:46.964895    3748 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:11:47.081406    3748 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:11:47.081440    3748 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:47.547400    3748 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:11:47.636690    3748 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:11:47.636878    3748 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	
	W1117 23:11:47.636878    3748 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:47.636878    3748 start.go:129] duration metric: createHost completed in 5.4175991s
	I1117 23:11:47.643837    3748 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:47.647672    3748 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:11:47.747196    3748 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:11:47.747307    3748 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:47.947813    3748 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:11:48.039690    3748 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:11:48.040092    3748 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:48.340579    3748 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:11:48.447365    3748 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:11:48.447546    3748 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:49.114981    3748 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:11:49.211480    3748 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:11:49.211954    3748 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	
	W1117 23:11:49.212016    3748 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:11:49.212084    3748 fix.go:57] fixHost completed within 23.9356289s
	I1117 23:11:49.212140    3748 start.go:80] releasing machines lock for "embed-certs-20211117231110-9504", held for 23.935685s
	W1117 23:11:49.212811    3748 out.go:241] * Failed to start docker container. Running "minikube delete -p embed-certs-20211117231110-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p embed-certs-20211117231110-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:11:49.217170    3748 out.go:176] 
	W1117 23:11:49.217170    3748 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:11:49.217170    3748 out.go:241] * 
	* 
	W1117 23:11:49.218704    3748 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:11:49.220920    3748 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p embed-certs-20211117231110-9504 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.22.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117231110-9504:

-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117231110-9504",
	        "Id": "85fa506927c7e7c3e4b6e4d9b355db403a40af5beafde53576c49efb2f077bb1",
	        "Created": "2021-11-17T23:11:42.653236081Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504: exit status 7 (1.8604479s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:11:51.288314   10736 status.go:247] status error: host: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (40.36s)

TestStartStop/group/no-preload/serial/FirstStart (39.73s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20211117231133-9504 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.4-rc.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-20211117231133-9504 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.4-rc.0: exit status 80 (37.7711162s)

-- stdout --
	* [no-preload-20211117231133-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node no-preload-20211117231133-9504 in cluster no-preload-20211117231133-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-20211117231133-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:11:33.342165    6200 out.go:297] Setting OutFile to fd 1424 ...
	I1117 23:11:33.407386    6200 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:11:33.407386    6200 out.go:310] Setting ErrFile to fd 1376...
	I1117 23:11:33.407386    6200 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:11:33.419228    6200 out.go:304] Setting JSON to false
	I1117 23:11:33.420804    6200 start.go:112] hostinfo: {"hostname":"minikube2","uptime":80009,"bootTime":1637110684,"procs":131,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:11:33.421864    6200 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:11:33.427731    6200 out.go:176] * [no-preload-20211117231133-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:11:33.428723    6200 notify.go:174] Checking for updates...
	I1117 23:11:33.432349    6200 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:11:33.435317    6200 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:11:33.437665    6200 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:11:33.438579    6200 config.go:176] Loaded profile config "embed-certs-20211117231110-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:11:33.439672    6200 config.go:176] Loaded profile config "kubenet-20211117230313-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:11:33.439672    6200 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:11:33.440490    6200 config.go:176] Loaded profile config "old-k8s-version-20211117231110-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I1117 23:11:33.440490    6200 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:11:34.961883    6200 docker.go:132] docker version: linux-19.03.12
	I1117 23:11:34.966226    6200 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:35.314005    6200 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:11:35.042913843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:11:35.319351    6200 out.go:176] * Using the docker driver based on user configuration
	I1117 23:11:35.319351    6200 start.go:280] selected driver: docker
	I1117 23:11:35.319351    6200 start.go:775] validating driver "docker" against <nil>
	I1117 23:11:35.319351    6200 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:11:35.376397    6200 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:35.713548    6200 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:11:35.456420143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:11:35.713810    6200 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:11:35.714360    6200 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:11:35.714428    6200 cni.go:93] Creating CNI manager for ""
	I1117 23:11:35.714428    6200 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:11:35.714428    6200 start_flags.go:282] config:
	{Name:no-preload-20211117231133-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:no-preload-20211117231133-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:11:35.717454    6200 out.go:176] * Starting control plane node no-preload-20211117231133-9504 in cluster no-preload-20211117231133-9504
	I1117 23:11:35.717567    6200 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:11:35.720509    6200 out.go:176] * Pulling base image ...
	I1117 23:11:35.720509    6200 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 23:11:35.721043    6200 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:11:35.721168    6200 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20211117231133-9504\config.json ...
	I1117 23:11:35.721308    6200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.7 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.7
	I1117 23:11:35.721308    6200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.22.4-rc.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.4-rc.0
	I1117 23:11:35.721308    6200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns:v1.8.4 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns_v1.8.4
	I1117 23:11:35.721436    6200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd:3.5.0-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.5.0-0
	I1117 23:11:35.721436    6200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.22.4-rc.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.4-rc.0
	I1117 23:11:35.721552    6200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.3.1
	I1117 23:11:35.721606    6200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.22.4-rc.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.4-rc.0
	I1117 23:11:35.721606    6200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.22.4-rc.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.4-rc.0
	I1117 23:11:35.721382    6200 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20211117231133-9504\config.json: {Name:mk079c5c2d7ef423abba3393517c9808a04dfe72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:11:35.721308    6200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause:3.5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.5
	I1117 23:11:35.721436    6200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5
	I1117 23:11:35.860321    6200 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:11:35.860401    6200 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:11:35.860502    6200 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:11:35.860877    6200 start.go:313] acquiring machines lock for no-preload-20211117231133-9504: {Name:mk72290d14abe23f276712b59e3d3211293a2fa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:35.861104    6200 start.go:317] acquired machines lock for "no-preload-20211117231133-9504" in 227.2µs
	I1117 23:11:35.861341    6200 start.go:89] Provisioning new machine with config: &{Name:no-preload-20211117231133-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:no-preload-20211117231133-9504 Namespace:default APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}
	I1117 23:11:35.861447    6200 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:11:35.865964    6200 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:11:35.866579    6200 start.go:160] libmachine.API.Create for "no-preload-20211117231133-9504" (driver="docker")
	I1117 23:11:35.866738    6200 client.go:168] LocalClient.Create starting
	I1117 23:11:35.867314    6200 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:11:35.867487    6200 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:35.867487    6200 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:35.867487    6200 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:11:35.867487    6200 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:35.867487    6200 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:35.875790    6200 cli_runner.go:115] Run: docker network inspect no-preload-20211117231133-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:11:35.895776    6200 cache.go:107] acquiring lock: {Name:mk16b2c84e0562e7dfabdafa8a4b202b59aeeb0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:35.895776    6200 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.7 exists
	I1117 23:11:35.895776    6200 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\docker.io\\kubernetesui\\metrics-scraper_v1.0.7" took 174.4668ms
	I1117 23:11:35.895776    6200 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.7 succeeded
	I1117 23:11:35.896783    6200 cache.go:107] acquiring lock: {Name:mke9439de88fd7cfde7b3c89f335155fffdfe7dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:35.896783    6200 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1117 23:11:35.896783    6200 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 174.8989ms
	I1117 23:11:35.896783    6200 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1117 23:11:35.898768    6200 cache.go:107] acquiring lock: {Name:mk27464e4112fb40ec903ad32451be9529e7a06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:35.898768    6200 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.5 exists
	I1117 23:11:35.898768    6200 cache.go:96] cache image "k8s.gcr.io/pause:3.5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\pause_3.5" took 176.8837ms
	I1117 23:11:35.898768    6200 cache.go:80] save to tar file k8s.gcr.io/pause:3.5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.5 succeeded
	I1117 23:11:35.907769    6200 cache.go:107] acquiring lock: {Name:mk0c4800ed5b13ab291ff2265133357b20336a8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:35.907930    6200 cache.go:107] acquiring lock: {Name:mk07753e378828d6a9b5c8273895167d2e474020 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:35.907769    6200 cache.go:107] acquiring lock: {Name:mkecddbdf5bdb96eb368bff20b8b8044de9c16ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:35.908028    6200 cache.go:107] acquiring lock: {Name:mkfa4d3d6685004524c7d13a9f49266b74c76ab8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:35.908028    6200 cache.go:107] acquiring lock: {Name:mk7f425adc20e24994bc202f1792de2676d16e94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:35.908107    6200 cache.go:107] acquiring lock: {Name:mkf2d8ca031c09006306827859434409adc972c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:35.908215    6200 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.3.1 exists
	I1117 23:11:35.908384    6200 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\docker.io\\kubernetesui\\dashboard_v2.3.1" took 186.8308ms
	I1117 23:11:35.908384    6200 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.3.1 succeeded
	I1117 23:11:35.908384    6200 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.5.0-0 exists
	I1117 23:11:35.908384    6200 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.4-rc.0
	I1117 23:11:35.908584    6200 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.4-rc.0
	I1117 23:11:35.908676    6200 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.0-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\etcd_3.5.0-0" took 187.1807ms
	I1117 23:11:35.908807    6200 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.0-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.5.0-0 succeeded
	I1117 23:11:35.908847    6200 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns_v1.8.4 exists
	I1117 23:11:35.908847    6200 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.22.4-rc.0
	I1117 23:11:35.909140    6200 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.4" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\coredns\\coredns_v1.8.4" took 187.41ms
	I1117 23:11:35.909140    6200 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.4 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns_v1.8.4 succeeded
	I1117 23:11:35.910051    6200 cache.go:107] acquiring lock: {Name:mkf3b50dab57c642704a948e6ed1b538aa89c43f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:35.910051    6200 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0
	I1117 23:11:35.913464    6200 image.go:176] found k8s.gcr.io/kube-scheduler:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-scheduler} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-scheduler:v1.22.4-rc.0} opener:0xc000d8e0e0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:11:35.913464    6200 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.4-rc.0
	I1117 23:11:35.914941    6200 image.go:176] found k8s.gcr.io/kube-apiserver:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-apiserver} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-apiserver:v1.22.4-rc.0} opener:0xc00016e0e0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:11:35.914941    6200 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.4-rc.0
	W1117 23:11:35.921235    6200 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.4-rc.0.1894050249.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.4-rc.0.1894050249.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:11:35.921235    6200 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.22.4-rc.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-scheduler_v1.22.4-rc.0" took 199.7395ms
	I1117 23:11:35.925168    6200 image.go:176] found k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-controller-manager} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0} opener:0xc0002da000 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:11:35.925168    6200 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.4-rc.0
	W1117 23:11:35.928176    6200 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.4-rc.0.836495919.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.4-rc.0.836495919.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:11:35.929177    6200 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-controller-manager_v1.22.4-rc.0" took 207.413ms
	I1117 23:11:35.935856    6200 image.go:176] found k8s.gcr.io/kube-proxy:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-proxy} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-proxy:v1.22.4-rc.0} opener:0xc00016e3f0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:11:35.935856    6200 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.4-rc.0
	W1117 23:11:35.940355    6200 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.4-rc.0.2261647343.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.4-rc.0.2261647343.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:11:35.940684    6200 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.22.4-rc.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-apiserver_v1.22.4-rc.0" took 219.3749ms
	W1117 23:11:35.949809    6200 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.4-rc.0.1967827556.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.4-rc.0.1967827556.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:11:35.950074    6200 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.22.4-rc.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-proxy_v1.22.4-rc.0" took 228.4665ms
	W1117 23:11:35.984518    6200 cli_runner.go:162] docker network inspect no-preload-20211117231133-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:11:35.989319    6200 network_create.go:254] running [docker network inspect no-preload-20211117231133-9504] to gather additional debugging logs...
	I1117 23:11:35.989319    6200 cli_runner.go:115] Run: docker network inspect no-preload-20211117231133-9504
	W1117 23:11:36.080461    6200 cli_runner.go:162] docker network inspect no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:11:36.080461    6200 network_create.go:257] error running [docker network inspect no-preload-20211117231133-9504]: docker network inspect no-preload-20211117231133-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20211117231133-9504
	I1117 23:11:36.080461    6200 network_create.go:259] output of [docker network inspect no-preload-20211117231133-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20211117231133-9504
	
	** /stderr **
	I1117 23:11:36.084512    6200 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:11:36.192581    6200 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000e580f0] misses:0}
	I1117 23:11:36.193577    6200 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:36.193577    6200 network_create.go:106] attempt to create docker network no-preload-20211117231133-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:11:36.196936    6200 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20211117231133-9504
	I1117 23:11:36.400713    6200 network_create.go:90] docker network no-preload-20211117231133-9504 192.168.49.0/24 created
	I1117 23:11:36.400713    6200 kic.go:106] calculated static IP "192.168.49.2" for the "no-preload-20211117231133-9504" container
	I1117 23:11:36.409278    6200 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:11:36.504123    6200 cli_runner.go:115] Run: docker volume create no-preload-20211117231133-9504 --label name.minikube.sigs.k8s.io=no-preload-20211117231133-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:11:36.605669    6200 oci.go:102] Successfully created a docker volume no-preload-20211117231133-9504
	I1117 23:11:36.610246    6200 cli_runner.go:115] Run: docker run --rm --name no-preload-20211117231133-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20211117231133-9504 --entrypoint /usr/bin/test -v no-preload-20211117231133-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:11:37.676332    6200 cli_runner.go:168] Completed: docker run --rm --name no-preload-20211117231133-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20211117231133-9504 --entrypoint /usr/bin/test -v no-preload-20211117231133-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.0660783s)
	I1117 23:11:37.676332    6200 oci.go:106] Successfully prepared a docker volume no-preload-20211117231133-9504
	I1117 23:11:37.676751    6200 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 23:11:37.681003    6200 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:38.028007    6200 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:11:37.770560288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:11:38.028007    6200 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:11:38.028007    6200 client.go:171] LocalClient.Create took 2.1612522s
	I1117 23:11:40.037085    6200 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:11:40.040723    6200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:11:40.131163    6200 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:11:40.131495    6200 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:40.412443    6200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:11:40.500354    6200 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:11:40.500830    6200 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:41.045391    6200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:11:41.135751    6200 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:11:41.135947    6200 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:41.797918    6200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:11:41.892011    6200 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	W1117 23:11:41.892271    6200 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	
	W1117 23:11:41.892271    6200 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:41.892271    6200 start.go:129] duration metric: createHost completed in 6.0307788s
	I1117 23:11:41.892271    6200 start.go:80] releasing machines lock for "no-preload-20211117231133-9504", held for 6.0311216s
	W1117 23:11:41.892271    6200 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:11:41.901100    6200 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:11:41.990779    6200 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:41.991078    6200 delete.go:82] Unable to get host status for no-preload-20211117231133-9504, assuming it has already been deleted: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	W1117 23:11:41.991731    6200 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:11:41.991731    6200 start.go:547] Will try again in 5 seconds ...
	I1117 23:11:46.992888    6200 start.go:313] acquiring machines lock for no-preload-20211117231133-9504: {Name:mk72290d14abe23f276712b59e3d3211293a2fa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:46.993121    6200 start.go:317] acquired machines lock for "no-preload-20211117231133-9504" in 196.7µs
	I1117 23:11:46.993264    6200 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:11:46.993264    6200 fix.go:55] fixHost starting: 
	I1117 23:11:47.002154    6200 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:11:47.103497    6200 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:47.103497    6200 fix.go:108] recreateIfNeeded on no-preload-20211117231133-9504: state= err=unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:47.103497    6200 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:11:47.107867    6200 out.go:176] * docker "no-preload-20211117231133-9504" container is missing, will recreate.
	I1117 23:11:47.107867    6200 delete.go:124] DEMOLISHING no-preload-20211117231133-9504 ...
	I1117 23:11:47.116323    6200 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:11:47.224363    6200 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:11:47.224363    6200 stop.go:75] unable to get state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:47.224363    6200 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:47.236858    6200 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:11:47.342781    6200 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:47.342781    6200 delete.go:82] Unable to get host status for no-preload-20211117231133-9504, assuming it has already been deleted: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:47.345768    6200 cli_runner.go:115] Run: docker container inspect -f {{.Id}} no-preload-20211117231133-9504
	W1117 23:11:47.433944    6200 cli_runner.go:162] docker container inspect -f {{.Id}} no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:11:47.434047    6200 kic.go:360] could not find the container no-preload-20211117231133-9504 to remove it. will try anyways
	I1117 23:11:47.442057    6200 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:11:47.541042    6200 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:11:47.541042    6200 oci.go:83] error getting container status, will try to delete anyways: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:47.546678    6200 cli_runner.go:115] Run: docker exec --privileged -t no-preload-20211117231133-9504 /bin/bash -c "sudo init 0"
	W1117 23:11:47.646909    6200 cli_runner.go:162] docker exec --privileged -t no-preload-20211117231133-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:11:47.646909    6200 oci.go:658] error shutdown no-preload-20211117231133-9504: docker exec --privileged -t no-preload-20211117231133-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:48.651052    6200 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:11:48.756936    6200 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:48.756936    6200 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:48.756936    6200 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:11:48.756936    6200 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:49.225797    6200 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:11:49.331116    6200 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:49.331308    6200 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:49.331308    6200 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:11:49.331308    6200 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:50.226515    6200 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:11:50.320248    6200 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:50.320505    6200 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:50.320680    6200 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:11:50.320680    6200 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:50.962281    6200 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:11:51.066584    6200 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:51.066584    6200 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:51.066584    6200 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:11:51.066584    6200 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:52.178798    6200 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:11:52.273417    6200 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:52.273615    6200 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:52.273615    6200 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:11:52.273615    6200 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:53.790535    6200 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:11:53.891555    6200 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:53.891555    6200 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:53.891555    6200 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:11:53.891555    6200 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:56.937617    6200 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:11:57.034254    6200 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:11:57.034254    6200 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:11:57.034254    6200 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:11:57.034254    6200 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:02.821542    6200 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:02.909492    6200 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:02.909583    6200 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:02.909649    6200 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:12:02.909672    6200 oci.go:87] couldn't shut down no-preload-20211117231133-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	 
	I1117 23:12:02.913833    6200 cli_runner.go:115] Run: docker rm -f -v no-preload-20211117231133-9504
	W1117 23:12:03.001337    6200 cli_runner.go:162] docker rm -f -v no-preload-20211117231133-9504 returned with exit code 1
	W1117 23:12:03.002279    6200 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:12:03.002279    6200 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:12:04.002631    6200 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:12:04.006460    6200 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:12:04.006460    6200 start.go:160] libmachine.API.Create for "no-preload-20211117231133-9504" (driver="docker")
	I1117 23:12:04.006460    6200 client.go:168] LocalClient.Create starting
	I1117 23:12:04.007233    6200 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:12:04.007539    6200 main.go:130] libmachine: Decoding PEM data...
	I1117 23:12:04.007597    6200 main.go:130] libmachine: Parsing certificate...
	I1117 23:12:04.007777    6200 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:12:04.007997    6200 main.go:130] libmachine: Decoding PEM data...
	I1117 23:12:04.008031    6200 main.go:130] libmachine: Parsing certificate...
	I1117 23:12:04.013199    6200 cli_runner.go:115] Run: docker network inspect no-preload-20211117231133-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:12:04.101145    6200 cli_runner.go:162] docker network inspect no-preload-20211117231133-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:12:04.105591    6200 network_create.go:254] running [docker network inspect no-preload-20211117231133-9504] to gather additional debugging logs...
	I1117 23:12:04.105691    6200 cli_runner.go:115] Run: docker network inspect no-preload-20211117231133-9504
	W1117 23:12:04.199158    6200 cli_runner.go:162] docker network inspect no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:12:04.199290    6200 network_create.go:257] error running [docker network inspect no-preload-20211117231133-9504]: docker network inspect no-preload-20211117231133-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20211117231133-9504
	I1117 23:12:04.199467    6200 network_create.go:259] output of [docker network inspect no-preload-20211117231133-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20211117231133-9504
	
	** /stderr **
	I1117 23:12:04.203565    6200 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:12:04.307292    6200 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e580f0] amended:false}} dirty:map[] misses:0}
	I1117 23:12:04.307292    6200 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:12:04.322203    6200 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000e580f0] amended:true}} dirty:map[192.168.49.0:0xc000e580f0 192.168.58.0:0xc0012121a0] misses:0}
	I1117 23:12:04.322203    6200 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:12:04.322203    6200 network_create.go:106] attempt to create docker network no-preload-20211117231133-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:12:04.326179    6200 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20211117231133-9504
	I1117 23:12:04.539670    6200 network_create.go:90] docker network no-preload-20211117231133-9504 192.168.58.0/24 created
	I1117 23:12:04.539670    6200 kic.go:106] calculated static IP "192.168.58.2" for the "no-preload-20211117231133-9504" container
	I1117 23:12:04.547590    6200 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:12:04.650851    6200 cli_runner.go:115] Run: docker volume create no-preload-20211117231133-9504 --label name.minikube.sigs.k8s.io=no-preload-20211117231133-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:12:04.752841    6200 oci.go:102] Successfully created a docker volume no-preload-20211117231133-9504
	I1117 23:12:04.757092    6200 cli_runner.go:115] Run: docker run --rm --name no-preload-20211117231133-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20211117231133-9504 --entrypoint /usr/bin/test -v no-preload-20211117231133-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:12:05.610290    6200 oci.go:106] Successfully prepared a docker volume no-preload-20211117231133-9504
	I1117 23:12:05.610290    6200 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 23:12:05.615158    6200 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:12:05.964687    6200 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:12:05.697659846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:12:05.965066    6200 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:12:05.965161    6200 client.go:171] LocalClient.Create took 1.9586864s
	I1117 23:12:07.973633    6200 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:12:07.977135    6200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:12:08.077902    6200 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:12:08.078263    6200 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:08.261783    6200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:12:08.352248    6200 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:12:08.352528    6200 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:08.687464    6200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:12:08.778334    6200 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:12:08.778810    6200 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:09.244346    6200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:12:09.334348    6200 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	W1117 23:12:09.334413    6200 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	
	W1117 23:12:09.334413    6200 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:09.334413    6200 start.go:129] duration metric: createHost completed in 5.3317421s
	I1117 23:12:09.340990    6200 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:12:09.344517    6200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:12:09.435763    6200 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:12:09.436022    6200 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:09.637316    6200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:12:09.726380    6200 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:12:09.726641    6200 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:10.028581    6200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:12:10.120362    6200 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:12:10.120362    6200 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:10.789444    6200 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:12:10.895686    6200 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	W1117 23:12:10.896240    6200 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	
	W1117 23:12:10.896541    6200 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:10.896622    6200 fix.go:57] fixHost completed within 23.9031785s
	I1117 23:12:10.896679    6200 start.go:80] releasing machines lock for "no-preload-20211117231133-9504", held for 23.903378s
	W1117 23:12:10.896679    6200 out.go:241] * Failed to start docker container. Running "minikube delete -p no-preload-20211117231133-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p no-preload-20211117231133-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:12:10.903000    6200 out.go:176] 
	W1117 23:12:10.903000    6200 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:12:10.903000    6200 out.go:241] * 
	* 
	W1117 23:12:10.903951    6200 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:12:10.906923    6200 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p no-preload-20211117231133-9504 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.4-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117231133-9504
helpers_test.go:235: (dbg) docker inspect no-preload-20211117231133-9504:
-- stdout --
	[
	    {
	        "Name": "no-preload-20211117231133-9504",
	        "Id": "03729738a14bc6e222aa8b654491630e4091cfad66b947339e45d35ea236214f",
	        "Created": "2021-11-17T23:12:04.406110101Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504: exit status 7 (1.758112s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 23:12:12.864976    3856 status.go:247] status error: host: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117231133-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (39.73s)

TestStartStop/group/old-k8s-version/serial/DeployApp (4.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20211117231110-9504 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211117231110-9504 create -f testdata\busybox.yaml: exit status 1 (218.5751ms)
** stderr ** 
	error: context "old-k8s-version-20211117231110-9504" does not exist
** /stderr **
start_stop_delete_test.go:181: kubectl --context old-k8s-version-20211117231110-9504 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117231110-9504:
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117231110-9504",
	        "Id": "6009a0bc1d99ae6c82ca17b35f379158c558287ff62cc23322707284d430eba4",
	        "Created": "2021-11-17T23:11:41.713808941Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504: exit status 7 (1.8596493s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 23:11:52.562393    8060 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117231110-9504:
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:14Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-20211117231110-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/old-k8s-version-20211117231110-9504/_data",
	        "Name": "old-k8s-version-20211117231110-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504: exit status 7 (1.8667753s)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 23:11:54.537751    3964 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (4.15s)

TestStartStop/group/embed-certs/serial/DeployApp (4.2s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20211117231110-9504 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context embed-certs-20211117231110-9504 create -f testdata\busybox.yaml: exit status 1 (214.9422ms)
** stderr ** 
	error: context "embed-certs-20211117231110-9504" does not exist
** /stderr **
start_stop_delete_test.go:181: kubectl --context embed-certs-20211117231110-9504 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117231110-9504:
-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117231110-9504",
	        "Id": "85fa506927c7e7c3e4b6e4d9b355db403a40af5beafde53576c49efb2f077bb1",
	        "Created": "2021-11-17T23:11:42.653236081Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504: exit status 7 (1.9543047s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:11:53.566053    5028 status.go:247] status error: host: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117231110-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:15Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-20211117231110-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/embed-certs-20211117231110-9504/_data",
	        "Name": "embed-certs-20211117231110-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504: exit status 7 (1.8061066s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:11:55.493978    7324 status.go:247] status error: host: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (4.20s)
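Note that the two post-mortem `docker inspect embed-certs-20211117231110-9504` dumps above matched the profile's leftover *network* (the `IPAM`/`Subnet` block) and *volume* (the `Mountpoint` field), not a container: `docker inspect` without `--type` matches any object with that name. A rough, dependency-free way to classify which kind of object an inspect dump describes; the `classify_inspect` function is our own sketch, keyed only on fields visible in the dumps above:

```shell
#!/bin/sh
# `docker inspect NAME` matches any object type sharing that name. The JSON
# shape tells them apart: containers carry "State", volumes carry "Mountpoint",
# networks carry "IPAM".
classify_inspect() {
  if   printf '%s' "$1" | grep -q '"State"';      then echo container
  elif printf '%s' "$1" | grep -q '"Mountpoint"'; then echo volume
  elif printf '%s' "$1" | grep -q '"IPAM"';       then echo network
  else echo unknown
  fi
}

# The second post-mortem dump above is the profile's volume, e.g.:
classify_inspect '{"Mountpoint": "/var/lib/docker/volumes/x/_data"}'
```

Seeing only the network and volume survive confirms the container itself was never successfully created (consistent with the `No such container` stderr), rather than having crashed after startup.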

TestStartStop/group/default-k8s-different-port/serial/FirstStart (39.5s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20211117231152-9504 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.22.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20211117231152-9504 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.22.3: exit status 80 (37.5357312s)

-- stdout --
	* [default-k8s-different-port-20211117231152-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node default-k8s-different-port-20211117231152-9504 in cluster default-k8s-different-port-20211117231152-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-different-port-20211117231152-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:11:53.121380   10672 out.go:297] Setting OutFile to fd 1336 ...
	I1117 23:11:53.195602   10672 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:11:53.195602   10672 out.go:310] Setting ErrFile to fd 1388...
	I1117 23:11:53.195602   10672 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:11:53.207693   10672 out.go:304] Setting JSON to false
	I1117 23:11:53.209285   10672 start.go:112] hostinfo: {"hostname":"minikube2","uptime":80029,"bootTime":1637110684,"procs":133,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:11:53.209285   10672 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:11:53.213372   10672 out.go:176] * [default-k8s-different-port-20211117231152-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:11:53.214120   10672 notify.go:174] Checking for updates...
	I1117 23:11:53.217428   10672 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:11:53.220981   10672 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:11:53.223109   10672 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:11:53.224638   10672 config.go:176] Loaded profile config "embed-certs-20211117231110-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:11:53.225795   10672 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:11:53.226134   10672 config.go:176] Loaded profile config "no-preload-20211117231133-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 23:11:53.226134   10672 config.go:176] Loaded profile config "old-k8s-version-20211117231110-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I1117 23:11:53.226944   10672 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:11:54.874122   10672 docker.go:132] docker version: linux-19.03.12
	I1117 23:11:54.878143   10672 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:55.236125   10672 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:11:54.9626912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.
docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:11:55.241726   10672 out.go:176] * Using the docker driver based on user configuration
	I1117 23:11:55.241818   10672 start.go:280] selected driver: docker
	I1117 23:11:55.241818   10672 start.go:775] validating driver "docker" against <nil>
	I1117 23:11:55.241818   10672 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:11:55.302621   10672 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:11:55.638907   10672 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:11:55.380148925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:11:55.638907   10672 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:11:55.639922   10672 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:11:55.639922   10672 cni.go:93] Creating CNI manager for ""
	I1117 23:11:55.639922   10672 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:11:55.639922   10672 start_flags.go:282] config:
	{Name:default-k8s-different-port-20211117231152-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:default-k8s-different-port-20211117231152-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:11:55.642915   10672 out.go:176] * Starting control plane node default-k8s-different-port-20211117231152-9504 in cluster default-k8s-different-port-20211117231152-9504
	I1117 23:11:55.642915   10672 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:11:55.645904   10672 out.go:176] * Pulling base image ...
	I1117 23:11:55.645904   10672 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:11:55.645904   10672 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:11:55.645904   10672 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:11:55.645904   10672 cache.go:57] Caching tarball of preloaded images
	I1117 23:11:55.646908   10672 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:11:55.646908   10672 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:11:55.646908   10672 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20211117231152-9504\config.json ...
	I1117 23:11:55.646908   10672 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20211117231152-9504\config.json: {Name:mkf67e563fbc35b9b9a2d207af81ab19f92bde20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:11:55.742908   10672 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:11:55.742908   10672 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:11:55.742908   10672 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:11:55.742908   10672 start.go:313] acquiring machines lock for default-k8s-different-port-20211117231152-9504: {Name:mk2897e2360a69311577988e13dc34760667171e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:11:55.742908   10672 start.go:317] acquired machines lock for "default-k8s-different-port-20211117231152-9504" in 0s
	I1117 23:11:55.742908   10672 start.go:89] Provisioning new machine with config: &{Name:default-k8s-different-port-20211117231152-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:default-k8s-different-port-20211117231152-9504 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8444 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 23:11:55.742908   10672 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:11:55.746909   10672 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:11:55.747905   10672 start.go:160] libmachine.API.Create for "default-k8s-different-port-20211117231152-9504" (driver="docker")
	I1117 23:11:55.747905   10672 client.go:168] LocalClient.Create starting
	I1117 23:11:55.747905   10672 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:11:55.747905   10672 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:55.747905   10672 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:55.748904   10672 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:11:55.748904   10672 main.go:130] libmachine: Decoding PEM data...
	I1117 23:11:55.748904   10672 main.go:130] libmachine: Parsing certificate...
	I1117 23:11:55.753915   10672 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117231152-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:11:55.846355   10672 cli_runner.go:162] docker network inspect default-k8s-different-port-20211117231152-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:11:55.849682   10672 network_create.go:254] running [docker network inspect default-k8s-different-port-20211117231152-9504] to gather additional debugging logs...
	I1117 23:11:55.849682   10672 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117231152-9504
	W1117 23:11:55.939640   10672 cli_runner.go:162] docker network inspect default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:11:55.939808   10672 network_create.go:257] error running [docker network inspect default-k8s-different-port-20211117231152-9504]: docker network inspect default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20211117231152-9504
	I1117 23:11:55.939862   10672 network_create.go:259] output of [docker network inspect default-k8s-different-port-20211117231152-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20211117231152-9504
	
	** /stderr **
	I1117 23:11:55.943551   10672 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:11:56.056656   10672 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0004b0260] misses:0}
	I1117 23:11:56.056656   10672 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:11:56.056656   10672 network_create.go:106] attempt to create docker network default-k8s-different-port-20211117231152-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:11:56.060904   10672 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211117231152-9504
	I1117 23:11:56.264868   10672 network_create.go:90] docker network default-k8s-different-port-20211117231152-9504 192.168.49.0/24 created
	I1117 23:11:56.265062   10672 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20211117231152-9504" container
	I1117 23:11:56.271826   10672 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:11:56.374194   10672 cli_runner.go:115] Run: docker volume create default-k8s-different-port-20211117231152-9504 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117231152-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:11:56.483737   10672 oci.go:102] Successfully created a docker volume default-k8s-different-port-20211117231152-9504
	I1117 23:11:56.487741   10672 cli_runner.go:115] Run: docker run --rm --name default-k8s-different-port-20211117231152-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117231152-9504 --entrypoint /usr/bin/test -v default-k8s-different-port-20211117231152-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:11:57.607785   10672 cli_runner.go:168] Completed: docker run --rm --name default-k8s-different-port-20211117231152-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117231152-9504 --entrypoint /usr/bin/test -v default-k8s-different-port-20211117231152-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.1198271s)
	I1117 23:11:57.607785   10672 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20211117231152-9504
	I1117 23:11:57.607952   10672 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:11:57.607987   10672 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:11:57.612993   10672 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117231152-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:11:57.613259   10672 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:11:57.734204   10672 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117231152-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:11:57.734450   10672 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117231152-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:11:58.011792   10672 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:11:57.731727129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:11:58.012032   10672 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:11:58.012032   10672 client.go:171] LocalClient.Create took 2.26411s
	I1117 23:12:00.020869   10672 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:12:00.024525   10672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:00.116816   10672 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:00.117188   10672 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:00.399005   10672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:00.488454   10672 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:00.488712   10672 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:01.034131   10672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:01.126803   10672 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:01.127154   10672 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:01.787368   10672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:01.870672   10672 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	W1117 23:12:01.870859   10672 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	
	W1117 23:12:01.870923   10672 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:01.870923   10672 start.go:129] duration metric: createHost completed in 6.127969s
	I1117 23:12:01.870923   10672 start.go:80] releasing machines lock for "default-k8s-different-port-20211117231152-9504", held for 6.127969s
	W1117 23:12:01.870923   10672 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:12:01.880327   10672 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:01.976634   10672 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:01.976634   10672 delete.go:82] Unable to get host status for default-k8s-different-port-20211117231152-9504, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	W1117 23:12:01.976634   10672 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:12:01.976634   10672 start.go:547] Will try again in 5 seconds ...
	I1117 23:12:06.977410   10672 start.go:313] acquiring machines lock for default-k8s-different-port-20211117231152-9504: {Name:mk2897e2360a69311577988e13dc34760667171e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:06.977410   10672 start.go:317] acquired machines lock for "default-k8s-different-port-20211117231152-9504" in 0s
	I1117 23:12:06.978049   10672 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:12:06.978049   10672 fix.go:55] fixHost starting: 
	I1117 23:12:06.985337   10672 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:07.076253   10672 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:07.076253   10672 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211117231152-9504: state= err=unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:07.076253   10672 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:12:07.079877   10672 out.go:176] * docker "default-k8s-different-port-20211117231152-9504" container is missing, will recreate.
	I1117 23:12:07.079877   10672 delete.go:124] DEMOLISHING default-k8s-different-port-20211117231152-9504 ...
	I1117 23:12:07.087368   10672 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:07.180602   10672 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:07.180602   10672 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:07.180602   10672 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:07.188715   10672 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:07.280122   10672 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:07.280247   10672 delete.go:82] Unable to get host status for default-k8s-different-port-20211117231152-9504, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:07.284435   10672 cli_runner.go:115] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20211117231152-9504
	W1117 23:12:07.372440   10672 cli_runner.go:162] docker container inspect -f {{.Id}} default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:07.372545   10672 kic.go:360] could not find the container default-k8s-different-port-20211117231152-9504 to remove it. will try anyways
	I1117 23:12:07.376409   10672 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:07.466148   10672 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:07.466148   10672 oci.go:83] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:07.470146   10672 cli_runner.go:115] Run: docker exec --privileged -t default-k8s-different-port-20211117231152-9504 /bin/bash -c "sudo init 0"
	W1117 23:12:07.559483   10672 cli_runner.go:162] docker exec --privileged -t default-k8s-different-port-20211117231152-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:12:07.559663   10672 oci.go:658] error shutdown default-k8s-different-port-20211117231152-9504: docker exec --privileged -t default-k8s-different-port-20211117231152-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:08.565890   10672 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:08.653511   10672 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:08.653511   10672 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:08.653511   10672 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:12:08.653511   10672 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:09.121209   10672 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:09.211943   10672 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:09.212127   10672 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:09.212127   10672 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:12:09.212127   10672 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:10.106933   10672 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:10.200897   10672 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:10.201154   10672 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:10.201713   10672 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:12:10.201713   10672 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:10.843729   10672 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:10.940995   10672 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:10.940995   10672 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:10.940995   10672 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:12:10.940995   10672 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:12.055300   10672 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:12.145407   10672 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:12.145727   10672 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:12.145798   10672 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:12:12.145877   10672 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:13.662797   10672 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:13.769466   10672 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:13.769573   10672 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:13.769643   10672 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:12:13.769690   10672 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:16.814842   10672 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:16.909204   10672 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:16.909204   10672 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:16.909204   10672 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:12:16.909204   10672 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:22.694422   10672 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:22.787507   10672 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:22.787862   10672 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:22.787896   10672 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:12:22.787938   10672 oci.go:87] couldn't shut down default-k8s-different-port-20211117231152-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	 
	I1117 23:12:22.791430   10672 cli_runner.go:115] Run: docker rm -f -v default-k8s-different-port-20211117231152-9504
	W1117 23:12:22.888895   10672 cli_runner.go:162] docker rm -f -v default-k8s-different-port-20211117231152-9504 returned with exit code 1
	W1117 23:12:22.889895   10672 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:12:22.889895   10672 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:12:23.889999   10672 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:12:23.895195   10672 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:12:23.895449   10672 start.go:160] libmachine.API.Create for "default-k8s-different-port-20211117231152-9504" (driver="docker")
	I1117 23:12:23.895449   10672 client.go:168] LocalClient.Create starting
	I1117 23:12:23.896029   10672 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:12:23.896029   10672 main.go:130] libmachine: Decoding PEM data...
	I1117 23:12:23.896029   10672 main.go:130] libmachine: Parsing certificate...
	I1117 23:12:23.896029   10672 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:12:23.896554   10672 main.go:130] libmachine: Decoding PEM data...
	I1117 23:12:23.896554   10672 main.go:130] libmachine: Parsing certificate...
	I1117 23:12:23.902221   10672 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117231152-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:12:24.003133   10672 network_create.go:67] Found existing network {name:default-k8s-different-port-20211117231152-9504 subnet:0xc000e645d0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 23:12:24.003133   10672 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20211117231152-9504" container
	I1117 23:12:24.012210   10672 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:12:24.136576   10672 cli_runner.go:115] Run: docker volume create default-k8s-different-port-20211117231152-9504 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117231152-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:12:24.228411   10672 oci.go:102] Successfully created a docker volume default-k8s-different-port-20211117231152-9504
	I1117 23:12:24.231438   10672 cli_runner.go:115] Run: docker run --rm --name default-k8s-different-port-20211117231152-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117231152-9504 --entrypoint /usr/bin/test -v default-k8s-different-port-20211117231152-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:12:25.134508   10672 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20211117231152-9504
	I1117 23:12:25.134690   10672 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:12:25.134690   10672 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:12:25.140278   10672 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117231152-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:12:25.140482   10672 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:12:25.254556   10672 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117231152-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:12:25.254634   10672 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117231152-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:12:25.536611   10672 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:12:25.247785988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:12:25.537155   10672 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:12:25.537155   10672 client.go:171] LocalClient.Create took 1.6416933s
	I1117 23:12:27.547315   10672 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:12:27.550861   10672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:27.643805   10672 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:27.643805   10672 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:27.828625   10672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:27.915200   10672 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:27.915484   10672 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:28.251111   10672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:28.341521   10672 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:28.341986   10672 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:28.807951   10672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:28.895728   10672 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	W1117 23:12:28.895936   10672 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	
	W1117 23:12:28.895996   10672 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:28.895996   10672 start.go:129] duration metric: createHost completed in 5.0057297s
	I1117 23:12:28.903566   10672 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:12:28.906611   10672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:28.994508   10672 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:28.994674   10672 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:29.196411   10672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:29.287069   10672 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:29.287069   10672 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:29.589666   10672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:29.680432   10672 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:29.680432   10672 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:30.349761   10672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:30.441978   10672 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	W1117 23:12:30.442447   10672 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	
	W1117 23:12:30.442531   10672 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:30.442603   10672 fix.go:57] fixHost completed within 23.4643774s
	I1117 23:12:30.442603   10672 start.go:80] releasing machines lock for "default-k8s-different-port-20211117231152-9504", held for 23.4650164s
	W1117 23:12:30.443087   10672 out.go:241] * Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20211117231152-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20211117231152-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:12:30.447512   10672 out.go:176] 
	W1117 23:12:30.447725   10672 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:12:30.447725   10672 out.go:241] * 
	* 
	W1117 23:12:30.449116   10672 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:12:30.450961   10672 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p default-k8s-different-port-20211117231152-9504 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.22.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117231152-9504
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117231152-9504:

-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117231152-9504",
	        "Id": "958aad75e6207a47e0822eec615f5b93779f44fc3bfdf4fedaf773fd2df7bb30",
	        "Created": "2021-11-17T23:11:56.13952141Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504: exit status 7 (1.7701961s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:12:32.421896   11316 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117231152-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (39.50s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (4.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20211117231110-9504 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20211117231110-9504 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.8837633s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20211117231110-9504 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211117231110-9504 describe deploy/metrics-server -n kube-system: exit status 1 (219.9763ms)

** stderr ** 
	error: context "old-k8s-version-20211117231110-9504" does not exist

** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20211117231110-9504 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117231110-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:14Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-20211117231110-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/old-k8s-version-20211117231110-9504/_data",
	        "Name": "old-k8s-version-20211117231110-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504: exit status 7 (1.874911s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:11:58.634886     180 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (4.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.03s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20211117231110-9504 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20211117231110-9504 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.8577109s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20211117231110-9504 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context embed-certs-20211117231110-9504 describe deploy/metrics-server -n kube-system: exit status 1 (214.8277ms)

** stderr ** 
	error: context "embed-certs-20211117231110-9504" does not exist

** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-20211117231110-9504 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117231110-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:15Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-20211117231110-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/embed-certs-20211117231110-9504/_data",
	        "Name": "embed-certs-20211117231110-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504: exit status 7 (1.8187825s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:11:59.520707    9888 status.go:247] status error: host: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.03s)

TestStartStop/group/old-k8s-version/serial/Stop (17.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20211117231110-9504 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p old-k8s-version-20211117231110-9504 --alsologtostderr -v=3: exit status 82 (15.1545023s)

-- stdout --
	* Stopping node "old-k8s-version-20211117231110-9504"  ...
	* Stopping node "old-k8s-version-20211117231110-9504"  ...
	* Stopping node "old-k8s-version-20211117231110-9504"  ...
	* Stopping node "old-k8s-version-20211117231110-9504"  ...
	* Stopping node "old-k8s-version-20211117231110-9504"  ...
	* Stopping node "old-k8s-version-20211117231110-9504"  ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:11:58.838611   11132 out.go:297] Setting OutFile to fd 1572 ...
	I1117 23:11:58.913431   11132 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:11:58.913431   11132 out.go:310] Setting ErrFile to fd 1704...
	I1117 23:11:58.913431   11132 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:11:58.922402   11132 out.go:304] Setting JSON to false
	I1117 23:11:58.922992   11132 daemonize_windows.go:45] trying to kill existing schedule stop for profile old-k8s-version-20211117231110-9504...
	I1117 23:11:58.931380   11132 ssh_runner.go:152] Run: systemctl --version
	I1117 23:11:58.934515   11132 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:12:00.466211   11132 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:00.466435   11132 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: (1.5316844s)
	I1117 23:12:00.466435   11132 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:00.748230   11132 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:12:00.834853   11132 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:00.846427   11132 ssh_runner.go:152] Run: sudo service minikube-scheduled-stop stop
	I1117 23:12:00.849825   11132 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:12:00.934641   11132 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:00.934996   11132 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:01.233143   11132 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:12:01.325847   11132 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:01.326167   11132 retry.go:31] will retry after 351.64282ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:01.682699   11132 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:12:01.789968   11132 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:01.790164   11132 retry.go:31] will retry after 520.108592ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:02.314983   11132 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:12:02.405591   11132 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:02.405732   11132 openrc.go:165] stop output: 
	E1117 23:12:02.405732   11132 daemonize_windows.go:39] error terminating scheduled stop for profile old-k8s-version-20211117231110-9504: stopping schedule-stop service for profile old-k8s-version-20211117231110-9504: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:02.405732   11132 mustload.go:65] Loading cluster: old-k8s-version-20211117231110-9504
	I1117 23:12:02.406612   11132 config.go:176] Loaded profile config "old-k8s-version-20211117231110-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I1117 23:12:02.406612   11132 stop.go:39] StopHost: old-k8s-version-20211117231110-9504
	I1117 23:12:02.411140   11132 out.go:176] * Stopping node "old-k8s-version-20211117231110-9504"  ...
	I1117 23:12:02.421719   11132 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:02.509187   11132 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:02.509449   11132 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	W1117 23:12:02.509553   11132 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:02.509553   11132 retry.go:31] will retry after 565.637019ms: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:03.075731   11132 stop.go:39] StopHost: old-k8s-version-20211117231110-9504
	I1117 23:12:03.083590   11132 out.go:176] * Stopping node "old-k8s-version-20211117231110-9504"  ...
	I1117 23:12:03.094077   11132 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:03.186415   11132 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:03.186415   11132 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	W1117 23:12:03.186415   11132 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:03.186415   11132 retry.go:31] will retry after 984.778882ms: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:04.172245   11132 stop.go:39] StopHost: old-k8s-version-20211117231110-9504
	I1117 23:12:04.176070   11132 out.go:176] * Stopping node "old-k8s-version-20211117231110-9504"  ...
	I1117 23:12:04.182490   11132 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:04.276364   11132 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:04.276364   11132 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	W1117 23:12:04.276364   11132 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:04.276364   11132 retry.go:31] will retry after 1.343181417s: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:05.619699   11132 stop.go:39] StopHost: old-k8s-version-20211117231110-9504
	I1117 23:12:05.622898   11132 out.go:176] * Stopping node "old-k8s-version-20211117231110-9504"  ...
	I1117 23:12:05.631565   11132 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:05.728638   11132 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:05.728740   11132 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	W1117 23:12:05.728740   11132 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:05.728823   11132 retry.go:31] will retry after 2.703077529s: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:08.432897   11132 stop.go:39] StopHost: old-k8s-version-20211117231110-9504
	I1117 23:12:08.437137   11132 out.go:176] * Stopping node "old-k8s-version-20211117231110-9504"  ...
	I1117 23:12:08.443966   11132 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:08.535323   11132 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:08.535323   11132 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	W1117 23:12:08.535323   11132 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:08.535323   11132 retry.go:31] will retry after 5.139513932s: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:13.675139   11132 stop.go:39] StopHost: old-k8s-version-20211117231110-9504
	I1117 23:12:13.678574   11132 out.go:176] * Stopping node "old-k8s-version-20211117231110-9504"  ...
	I1117 23:12:13.686932   11132 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:13.776594   11132 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:13.776669   11132 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	W1117 23:12:13.776723   11132 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:13.779571   11132 out.go:176] 
	W1117 23:12:13.779571   11132 out.go:241] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-20211117231110-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-20211117231110-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	
	W1117 23:12:13.779571   11132 out.go:241] * 
	* 
	W1117 23:12:13.789723   11132 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:12:13.792042   11132 out.go:176] 

                                                
                                                
** /stderr **
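The retry intervals in the trace above (565ms, 984ms, 1.34s, 2.70s, 5.14s) roughly double on each failed `docker container inspect`. A minimal shell sketch of that doubling cadence (illustrative only — the delays and loop below are placeholders, not minikube's actual retry.go logic, which also adds jitter):

```shell
# Print a doubling backoff schedule, starting at an illustrative 500ms base.
# This only demonstrates the cadence; minikube computes jittered durations.
delay_ms=500
for attempt in 1 2 3 4 5; do
  echo "attempt ${attempt}: wait ${delay_ms}ms"
  delay_ms=$((delay_ms * 2))
done
```

After five attempts the schedule has grown past the point where the harness gives up and reports GUEST_STOP_TIMEOUT, which matches the ~15s wall time of the failing `stop` above.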
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p old-k8s-version-20211117231110-9504 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117231110-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:14Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-20211117231110-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/old-k8s-version-20211117231110-9504/_data",
	        "Name": "old-k8s-version-20211117231110-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504: exit status 7 (1.8384469s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:12:15.736069    4128 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (17.10s)
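The failing probe in this test can be reproduced outside the harness: `docker container inspect` on a container that was never created (here only its volume exists) exits non-zero with `Error: No such container` on stderr. The container name below is a placeholder, not one from this run:

```shell
# Inspect a container that does not exist; docker exits non-zero, so the
# fallback branch reports the status. "no-such-container-demo" is a
# placeholder name for illustration.
docker container inspect no-such-container-demo --format '{{.State.Status}}' \
  2>/dev/null || echo "inspect exit status: $?"
```

This is consistent with the post-mortem below: `docker inspect` resolves the name to a *volume* (Driver "local", a Mountpoint under /var/lib/docker/volumes), so the volume was created but the container itself never started.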

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (17.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20211117231110-9504 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p embed-certs-20211117231110-9504 --alsologtostderr -v=3: exit status 82 (15.2200074s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-20211117231110-9504"  ...
	* Stopping node "embed-certs-20211117231110-9504"  ...
	* Stopping node "embed-certs-20211117231110-9504"  ...
	* Stopping node "embed-certs-20211117231110-9504"  ...
	* Stopping node "embed-certs-20211117231110-9504"  ...
	* Stopping node "embed-certs-20211117231110-9504"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 23:11:59.734672    7220 out.go:297] Setting OutFile to fd 1884 ...
	I1117 23:11:59.809692    7220 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:11:59.809692    7220 out.go:310] Setting ErrFile to fd 2000...
	I1117 23:11:59.809692    7220 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:11:59.823669    7220 out.go:304] Setting JSON to false
	I1117 23:11:59.823669    7220 daemonize_windows.go:45] trying to kill existing schedule stop for profile embed-certs-20211117231110-9504...
	I1117 23:11:59.831355    7220 ssh_runner.go:152] Run: systemctl --version
	I1117 23:11:59.834975    7220 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:12:01.337391    7220 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:01.337391    7220 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: (1.5023303s)
	I1117 23:12:01.337636    7220 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:01.618909    7220 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:12:01.717096    7220 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:01.725199    7220 ssh_runner.go:152] Run: sudo service minikube-scheduled-stop stop
	I1117 23:12:01.728814    7220 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:12:01.837162    7220 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:01.837223    7220 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:02.133695    7220 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:12:02.225230    7220 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:02.225390    7220 retry.go:31] will retry after 351.64282ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:02.583075    7220 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:12:02.677051    7220 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:02.677386    7220 retry.go:31] will retry after 520.108592ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:03.201729    7220 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:12:03.299543    7220 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:03.299817    7220 openrc.go:165] stop output: 
	E1117 23:12:03.299892    7220 daemonize_windows.go:39] error terminating scheduled stop for profile embed-certs-20211117231110-9504: stopping schedule-stop service for profile embed-certs-20211117231110-9504: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:03.299987    7220 mustload.go:65] Loading cluster: embed-certs-20211117231110-9504
	I1117 23:12:03.300833    7220 config.go:176] Loaded profile config "embed-certs-20211117231110-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:12:03.301100    7220 stop.go:39] StopHost: embed-certs-20211117231110-9504
	I1117 23:12:03.304779    7220 out.go:176] * Stopping node "embed-certs-20211117231110-9504"  ...
	I1117 23:12:03.315495    7220 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:03.425401    7220 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:03.425759    7220 stop.go:75] unable to get state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	W1117 23:12:03.425759    7220 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:03.425885    7220 retry.go:31] will retry after 565.637019ms: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:03.992375    7220 stop.go:39] StopHost: embed-certs-20211117231110-9504
	I1117 23:12:03.996809    7220 out.go:176] * Stopping node "embed-certs-20211117231110-9504"  ...
	I1117 23:12:04.005650    7220 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:04.110021    7220 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:04.110021    7220 stop.go:75] unable to get state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	W1117 23:12:04.110021    7220 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:04.110021    7220 retry.go:31] will retry after 984.778882ms: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:05.095746    7220 stop.go:39] StopHost: embed-certs-20211117231110-9504
	I1117 23:12:05.100580    7220 out.go:176] * Stopping node "embed-certs-20211117231110-9504"  ...
	I1117 23:12:05.107110    7220 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:05.199983    7220 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:05.200074    7220 stop.go:75] unable to get state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	W1117 23:12:05.200074    7220 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:05.200074    7220 retry.go:31] will retry after 1.343181417s: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:06.544098    7220 stop.go:39] StopHost: embed-certs-20211117231110-9504
	I1117 23:12:06.549448    7220 out.go:176] * Stopping node "embed-certs-20211117231110-9504"  ...
	I1117 23:12:06.561187    7220 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:06.653379    7220 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:06.653379    7220 stop.go:75] unable to get state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	W1117 23:12:06.653379    7220 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:06.653379    7220 retry.go:31] will retry after 2.703077529s: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:09.356934    7220 stop.go:39] StopHost: embed-certs-20211117231110-9504
	I1117 23:12:09.359586    7220 out.go:176] * Stopping node "embed-certs-20211117231110-9504"  ...
	I1117 23:12:09.366586    7220 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:09.457314    7220 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:09.457588    7220 stop.go:75] unable to get state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	W1117 23:12:09.457588    7220 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:09.457588    7220 retry.go:31] will retry after 5.139513932s: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:14.598034    7220 stop.go:39] StopHost: embed-certs-20211117231110-9504
	I1117 23:12:14.601982    7220 out.go:176] * Stopping node "embed-certs-20211117231110-9504"  ...
	I1117 23:12:14.608769    7220 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:14.707669    7220 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:14.707724    7220 stop.go:75] unable to get state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	W1117 23:12:14.707724    7220 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:14.711076    7220 out.go:176] 
	W1117 23:12:14.711076    7220 out.go:241] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect embed-certs-20211117231110-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect embed-certs-20211117231110-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	
	W1117 23:12:14.711076    7220 out.go:241] * 
	* 
	W1117 23:12:14.722744    7220 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:12:14.726486    7220 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p embed-certs-20211117231110-9504 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117231110-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:15Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-20211117231110-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/embed-certs-20211117231110-9504/_data",
	        "Name": "embed-certs-20211117231110-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504: exit status 7 (1.8609666s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:12:16.715388   10088 status.go:247] status error: host: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (17.20s)
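The "Nonexistent" host status above is how minikube reports a container that `docker container inspect` cannot find at all (as opposed to one that exists but is stopped). A minimal sketch of that mapping, with `docker` stubbed so it runs without a Docker daemon; the function name `host_state` is made up for illustration and is not minikube's actual code:

```shell
#!/bin/sh
# Illustrative only: reproduce how a failed "docker container inspect"
# is reported as host state "Nonexistent", as in the log above.
# host_state is a hypothetical name; the real logic lives in minikube's status code.

# Stub docker so this sketch runs without a Docker daemon installed.
docker() {
  echo "Error: No such container: $3" >&2
  return 1
}

host_state() {
  name="$1"
  if state=$(docker container inspect "$name" --format '{{.State.Status}}' 2>/dev/null); then
    echo "$state"
  else
    # inspect exited non-zero: the container is gone, not merely stopped
    echo "Nonexistent"
  fi
}

host_state "embed-certs-20211117231110-9504"
```

This matches the log's pattern: `status --format={{.Host}}` exits 7 and prints `Nonexistent` whenever the underlying inspect fails with "No such container".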

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (4.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20211117231133-9504 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context no-preload-20211117231133-9504 create -f testdata\busybox.yaml: exit status 1 (225.4846ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-20211117231133-9504" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:181: kubectl --context no-preload-20211117231133-9504 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117231133-9504
helpers_test.go:235: (dbg) docker inspect no-preload-20211117231133-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-20211117231133-9504",
	        "Id": "03729738a14bc6e222aa8b654491630e4091cfad66b947339e45d35ea236214f",
	        "Created": "2021-11-17T23:12:04.406110101Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504: exit status 7 (1.8337193s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:12:15.034971    6008 status.go:247] status error: host: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117231133-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117231133-9504
helpers_test.go:235: (dbg) docker inspect no-preload-20211117231133-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-20211117231133-9504",
	        "Id": "03729738a14bc6e222aa8b654491630e4091cfad66b947339e45d35ea236214f",
	        "Created": "2021-11-17T23:12:04.406110101Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504: exit status 7 (1.8269455s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:12:16.987375    1356 status.go:247] status error: host: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117231133-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (4.11s)
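The `kubectl create` failure above is simply `error: context "…" does not exist`: the cluster never started, so no context was ever written to kubeconfig. A hedged sketch of guarding kubectl calls by checking the context first; `kubectl` is stubbed here so the example runs without a cluster, and the guard function name is illustrative:

```shell
#!/bin/sh
# Illustrative guard for the failure mode above: kubectl is handed a
# context that was never written to kubeconfig because the cluster
# failed to start. kubectl is stubbed so this runs anywhere.

kubectl() {
  # pretend the only known context is "minikube"
  if [ "$1 $2" = "config get-contexts" ]; then
    echo "minikube"
    return 0
  fi
  return 1
}

context_exists() {
  kubectl config get-contexts -o name | grep -qx "$1"
}

if context_exists "no-preload-20211117231133-9504"; then
  echo "context present"
else
  echo "context missing; skipping deploy"
fi
```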

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (5.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504: exit status 7 (1.8558662s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:12:17.591026   11580 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504

                                                
                                                
** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20211117231110-9504 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20211117231110-9504 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.8533433s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117231110-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:14Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-20211117231110-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/old-k8s-version-20211117231110-9504/_data",
	        "Name": "old-k8s-version-20211117231110-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504: exit status 7 (1.8246664s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:12:21.394890   11992 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (5.65s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (5.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504: exit status 7 (1.7968238s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:12:18.521363   11660 status.go:247] status error: host: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504

                                                
                                                
** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20211117231110-9504 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20211117231110-9504 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.8575747s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117231110-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:15Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-20211117231110-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/embed-certs-20211117231110-9504/_data",
	        "Name": "embed-certs-20211117231110-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504: exit status 7 (1.8138396s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:12:22.305148    3008 status.go:247] status error: host: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (5.58s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20211117231133-9504 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20211117231133-9504 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.8241014s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20211117231133-9504 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context no-preload-20211117231133-9504 describe deploy/metrics-server -n kube-system: exit status 1 (216.165ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-20211117231133-9504" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context no-preload-20211117231133-9504 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117231133-9504
helpers_test.go:235: (dbg) docker inspect no-preload-20211117231133-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-20211117231133-9504",
	        "Id": "03729738a14bc6e222aa8b654491630e4091cfad66b947339e45d35ea236214f",
	        "Created": "2021-11-17T23:12:04.406110101Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504: exit status 7 (1.8130346s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:12:20.941527    7440 status.go:247] status error: host: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117231133-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.96s)
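The Stop test that follows shows minikube's retry.go re-running the ssh-port lookup with growing delays (276ms, 291ms, 351ms, 520ms, 565ms) before giving up. A minimal sketch of that retry-with-increasing-delay pattern; the attempt count, base delay, and doubling multiplier are assumptions for illustration, not minikube's real constants:

```shell
#!/bin/sh
# Sketch of the retry pattern visible in the Stop log ("will retry after ...").
# Delays, attempt count, and the x2 growth factor are illustrative only.

attempts=0
retry_with_backoff() {
  max=$1; delay_ms=$2; shift 2
  i=0
  while :; do
    attempts=$((attempts + 1))
    "$@" && return 0          # command succeeded: stop retrying
    i=$((i + 1))
    [ "$i" -ge "$max" ] && return 1   # out of attempts: give up
    # sleep a fractional number of seconds (GNU and BSD sleep accept this)
    sleep "$(awk "BEGIN { print $delay_ms / 1000 }")"
    delay_ms=$((delay_ms * 2))        # grow the delay each round
  done
}

retry_with_backoff 3 100 false || echo "gave up after $attempts attempts"
```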

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (17.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20211117231133-9504 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p no-preload-20211117231133-9504 --alsologtostderr -v=3: exit status 82 (15.1535667s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-20211117231133-9504"  ...
	* Stopping node "no-preload-20211117231133-9504"  ...
	* Stopping node "no-preload-20211117231133-9504"  ...
	* Stopping node "no-preload-20211117231133-9504"  ...
	* Stopping node "no-preload-20211117231133-9504"  ...
	* Stopping node "no-preload-20211117231133-9504"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 23:12:21.145838    8424 out.go:297] Setting OutFile to fd 1752 ...
	I1117 23:12:21.208834    8424 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:12:21.208834    8424 out.go:310] Setting ErrFile to fd 2024...
	I1117 23:12:21.208834    8424 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:12:21.219831    8424 out.go:304] Setting JSON to false
	I1117 23:12:21.220832    8424 daemonize_windows.go:45] trying to kill existing schedule stop for profile no-preload-20211117231133-9504...
	I1117 23:12:21.228847    8424 ssh_runner.go:152] Run: systemctl --version
	I1117 23:12:21.232830    8424 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:12:22.757030    8424 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:12:22.757030    8424 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: (1.5241878s)
	I1117 23:12:22.757030    8424 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:23.037920    8424 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:12:23.137861    8424 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:12:23.144179    8424 ssh_runner.go:152] Run: sudo service minikube-scheduled-stop stop
	I1117 23:12:23.147174    8424 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:12:23.234306    8424 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:12:23.234566    8424 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:23.529016    8424 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:12:23.622019    8424 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:12:23.622275    8424 retry.go:31] will retry after 351.64282ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:23.978201    8424 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:12:24.082676    8424 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:12:24.082676    8424 retry.go:31] will retry after 520.108592ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:24.607148    8424 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:12:24.706587    8424 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:12:24.706587    8424 openrc.go:165] stop output: 
	E1117 23:12:24.706587    8424 daemonize_windows.go:39] error terminating scheduled stop for profile no-preload-20211117231133-9504: stopping schedule-stop service for profile no-preload-20211117231133-9504: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:24.706975    8424 mustload.go:65] Loading cluster: no-preload-20211117231133-9504
	I1117 23:12:24.707190    8424 config.go:176] Loaded profile config "no-preload-20211117231133-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 23:12:24.707819    8424 stop.go:39] StopHost: no-preload-20211117231133-9504
	I1117 23:12:24.711576    8424 out.go:176] * Stopping node "no-preload-20211117231133-9504"  ...
	I1117 23:12:24.718584    8424 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:24.809896    8424 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:24.809896    8424 stop.go:75] unable to get state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	W1117 23:12:24.809896    8424 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:24.809896    8424 retry.go:31] will retry after 565.637019ms: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:25.376417    8424 stop.go:39] StopHost: no-preload-20211117231133-9504
	I1117 23:12:25.380601    8424 out.go:176] * Stopping node "no-preload-20211117231133-9504"  ...
	I1117 23:12:25.388000    8424 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:25.482313    8424 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:25.482313    8424 stop.go:75] unable to get state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	W1117 23:12:25.482313    8424 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:25.482313    8424 retry.go:31] will retry after 984.778882ms: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:26.467497    8424 stop.go:39] StopHost: no-preload-20211117231133-9504
	I1117 23:12:26.471008    8424 out.go:176] * Stopping node "no-preload-20211117231133-9504"  ...
	I1117 23:12:26.479803    8424 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:26.573894    8424 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:26.573894    8424 stop.go:75] unable to get state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	W1117 23:12:26.573894    8424 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:26.573894    8424 retry.go:31] will retry after 1.343181417s: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:27.917285    8424 stop.go:39] StopHost: no-preload-20211117231133-9504
	I1117 23:12:27.921959    8424 out.go:176] * Stopping node "no-preload-20211117231133-9504"  ...
	I1117 23:12:27.930106    8424 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:28.018519    8424 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:28.018519    8424 stop.go:75] unable to get state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	W1117 23:12:28.018519    8424 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:28.018519    8424 retry.go:31] will retry after 2.703077529s: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:30.721671    8424 stop.go:39] StopHost: no-preload-20211117231133-9504
	I1117 23:12:30.724670    8424 out.go:176] * Stopping node "no-preload-20211117231133-9504"  ...
	I1117 23:12:30.731676    8424 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:30.825656    8424 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:30.825734    8424 stop.go:75] unable to get state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	W1117 23:12:30.825734    8424 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:30.825831    8424 retry.go:31] will retry after 5.139513932s: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:35.965821    8424 stop.go:39] StopHost: no-preload-20211117231133-9504
	I1117 23:12:35.969509    8424 out.go:176] * Stopping node "no-preload-20211117231133-9504"  ...
	I1117 23:12:35.978803    8424 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:36.072296    8424 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:36.072296    8424 stop.go:75] unable to get state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	W1117 23:12:36.072296    8424 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:36.076812    8424 out.go:176] 
	W1117 23:12:36.076812    8424 out.go:241] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-20211117231133-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-20211117231133-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	
	W1117 23:12:36.076812    8424 out.go:241] * 
	* 
	W1117 23:12:36.084331    8424 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:12:36.087234    8424 out.go:176] 

** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p no-preload-20211117231133-9504 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117231133-9504
helpers_test.go:235: (dbg) docker inspect no-preload-20211117231133-9504:

-- stdout --
	[
	    {
	        "Name": "no-preload-20211117231133-9504",
	        "Id": "03729738a14bc6e222aa8b654491630e4091cfad66b947339e45d35ea236214f",
	        "Created": "2021-11-17T23:12:04.406110101Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504

=== CONT  TestStartStop/group/no-preload/serial/Stop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504: exit status 7 (1.8473831s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:12:38.058277    7716 status.go:247] status error: host: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117231133-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (17.12s)

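The `--format` expression the failing commands pass to docker, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, walks the inspect output down to the host port mapped to the container's SSH port. docker's `--format` is Go's text/template, so the same expression can be exercised against a hand-written sample; the JSON below is illustrative, not real inspect output:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// sample mimics the slice of `docker container inspect` output that the
// template reads; the values are made up for illustration.
const sample = `{
  "NetworkSettings": {
    "Ports": {
      "22/tcp": [ {"HostIp": "127.0.0.1", "HostPort": "55000"} ]
    }
  }
}`

// hostPort22 evaluates the same template string minikube hands to
// docker --format, against parsed inspect JSON.
func hostPort22(inspectJSON string) (string, error) {
	var c map[string]interface{}
	if err := json.Unmarshal([]byte(inspectJSON), &c); err != nil {
		return "", err
	}
	tmpl, err := template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, c); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	port, err := hostPort22(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(port)
}
```

When the container does not exist, docker never reaches template evaluation: it exits 1 with `Error: No such container`, which is exactly the stderr repeated throughout this log.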
TestStartStop/group/old-k8s-version/serial/SecondStart (60.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20211117231110-9504 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-20211117231110-9504 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0: exit status 80 (58.4928248s)

-- stdout --
	* [old-k8s-version-20211117231110-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Kubernetes 1.22.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20211117231110-9504 in cluster old-k8s-version-20211117231110-9504
	* Pulling base image ...
	* docker "old-k8s-version-20211117231110-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-20211117231110-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:12:21.590135   11448 out.go:297] Setting OutFile to fd 1556 ...
	I1117 23:12:21.668127   11448 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:12:21.668751   11448 out.go:310] Setting ErrFile to fd 1396...
	I1117 23:12:21.668751   11448 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:12:21.679071   11448 out.go:304] Setting JSON to false
	I1117 23:12:21.682065   11448 start.go:112] hostinfo: {"hostname":"minikube2","uptime":80057,"bootTime":1637110684,"procs":133,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:12:21.682065   11448 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:12:21.686064   11448 out.go:176] * [old-k8s-version-20211117231110-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:12:21.687065   11448 notify.go:174] Checking for updates...
	I1117 23:12:21.689062   11448 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:12:21.691064   11448 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:12:21.693073   11448 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:12:21.694070   11448 config.go:176] Loaded profile config "old-k8s-version-20211117231110-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I1117 23:12:21.697071   11448 out.go:176] * Kubernetes 1.22.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.3
	I1117 23:12:21.697071   11448 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:12:23.301500   11448 docker.go:132] docker version: linux-19.03.12
	I1117 23:12:23.306945   11448 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:12:23.663989   11448 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:12:23.390509161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:12:23.670015   11448 out.go:176] * Using the docker driver based on existing profile
	I1117 23:12:23.670086   11448 start.go:280] selected driver: docker
	I1117 23:12:23.670086   11448 start.go:775] validating driver "docker" against &{Name:old-k8s-version-20211117231110-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20211117231110-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:12:23.670086   11448 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:12:24.067150   11448 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:12:24.427655   11448 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:12:24.154227489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:12:24.427974   11448 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:12:24.428025   11448 cni.go:93] Creating CNI manager for ""
	I1117 23:12:24.428025   11448 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:12:24.428106   11448 start_flags.go:282] config:
	{Name:old-k8s-version-20211117231110-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20211117231110-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:12:24.435716   11448 out.go:176] * Starting control plane node old-k8s-version-20211117231110-9504 in cluster old-k8s-version-20211117231110-9504
	I1117 23:12:24.435716   11448 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:12:24.440066   11448 out.go:176] * Pulling base image ...
	I1117 23:12:24.440121   11448 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 23:12:24.440121   11448 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:12:24.440298   11448 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4
	I1117 23:12:24.440298   11448 cache.go:57] Caching tarball of preloaded images
	I1117 23:12:24.440737   11448 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:12:24.440737   11448 cache.go:60] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I1117 23:12:24.441062   11448 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-20211117231110-9504\config.json ...
	I1117 23:12:24.543474   11448 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:12:24.543474   11448 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:12:24.543474   11448 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:12:24.543803   11448 start.go:313] acquiring machines lock for old-k8s-version-20211117231110-9504: {Name:mkf20483f474415f88720279d1dc914d2f1e71fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:24.543900   11448 start.go:317] acquired machines lock for "old-k8s-version-20211117231110-9504" in 52.4µs
	I1117 23:12:24.543900   11448 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:12:24.543900   11448 fix.go:55] fixHost starting: 
	I1117 23:12:24.554500   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:24.661592   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:24.661592   11448 fix.go:108] recreateIfNeeded on old-k8s-version-20211117231110-9504: state= err=unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:24.661592   11448 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:12:24.668592   11448 out.go:176] * docker "old-k8s-version-20211117231110-9504" container is missing, will recreate.
	I1117 23:12:24.668592   11448 delete.go:124] DEMOLISHING old-k8s-version-20211117231110-9504 ...
	I1117 23:12:24.676582   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:24.776897   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:24.776897   11448 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:24.776897   11448 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:24.784892   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:24.873979   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:24.873979   11448 delete.go:82] Unable to get host status for old-k8s-version-20211117231110-9504, assuming it has already been deleted: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:24.876980   11448 cli_runner.go:115] Run: docker container inspect -f {{.Id}} old-k8s-version-20211117231110-9504
	W1117 23:12:24.966975   11448 cli_runner.go:162] docker container inspect -f {{.Id}} old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:24.966975   11448 kic.go:360] could not find the container old-k8s-version-20211117231110-9504 to remove it. will try anyways
	I1117 23:12:24.973037   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:25.066426   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:25.066654   11448 oci.go:83] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:25.070854   11448 cli_runner.go:115] Run: docker exec --privileged -t old-k8s-version-20211117231110-9504 /bin/bash -c "sudo init 0"
	W1117 23:12:25.177567   11448 cli_runner.go:162] docker exec --privileged -t old-k8s-version-20211117231110-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:12:25.177655   11448 oci.go:658] error shutdown old-k8s-version-20211117231110-9504: docker exec --privileged -t old-k8s-version-20211117231110-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:26.183939   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:26.275540   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:26.275540   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:26.275841   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:26.275841   11448 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:26.834516   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:26.935231   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:26.935359   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:26.935359   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:26.935464   11448 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:28.020509   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:28.109351   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:28.109597   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:28.109632   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:28.109661   11448 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:29.425648   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:29.514378   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:29.514652   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:29.514652   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:29.514652   11448 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:31.101802   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:31.193190   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:31.193190   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:31.193190   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:31.193190   11448 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:33.540083   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:33.626817   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:33.626904   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:33.626983   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:33.627026   11448 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:38.138521   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:38.233705   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:38.233705   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:38.233705   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:38.233705   11448 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:41.460104   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:41.568356   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:41.568464   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:41.568464   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:41.568544   11448 oci.go:87] couldn't shut down old-k8s-version-20211117231110-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	 
	I1117 23:12:41.573418   11448 cli_runner.go:115] Run: docker rm -f -v old-k8s-version-20211117231110-9504
	W1117 23:12:41.668510   11448 cli_runner.go:162] docker rm -f -v old-k8s-version-20211117231110-9504 returned with exit code 1
	W1117 23:12:41.669639   11448 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:12:41.669639   11448 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:12:42.670407   11448 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:12:42.674294   11448 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:12:42.674714   11448 start.go:160] libmachine.API.Create for "old-k8s-version-20211117231110-9504" (driver="docker")
	I1117 23:12:42.674822   11448 client.go:168] LocalClient.Create starting
	I1117 23:12:42.675387   11448 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:12:42.675664   11448 main.go:130] libmachine: Decoding PEM data...
	I1117 23:12:42.675664   11448 main.go:130] libmachine: Parsing certificate...
	I1117 23:12:42.675664   11448 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:12:42.675664   11448 main.go:130] libmachine: Decoding PEM data...
	I1117 23:12:42.675664   11448 main.go:130] libmachine: Parsing certificate...
	I1117 23:12:42.681109   11448 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117231110-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:12:42.773747   11448 cli_runner.go:162] docker network inspect old-k8s-version-20211117231110-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:12:42.777974   11448 network_create.go:254] running [docker network inspect old-k8s-version-20211117231110-9504] to gather additional debugging logs...
	I1117 23:12:42.777974   11448 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117231110-9504
	W1117 23:12:42.869693   11448 cli_runner.go:162] docker network inspect old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:42.869693   11448 network_create.go:257] error running [docker network inspect old-k8s-version-20211117231110-9504]: docker network inspect old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20211117231110-9504
	I1117 23:12:42.869843   11448 network_create.go:259] output of [docker network inspect old-k8s-version-20211117231110-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20211117231110-9504
	
	** /stderr **
	I1117 23:12:42.874239   11448 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:12:42.979015   11448 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014e628] misses:0}
	I1117 23:12:42.979015   11448 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:12:42.979015   11448 network_create.go:106] attempt to create docker network old-k8s-version-20211117231110-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:12:42.984382   11448 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117231110-9504
	W1117 23:12:43.073711   11448 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117231110-9504 returned with exit code 1
	W1117 23:12:43.073711   11448 network_create.go:98] failed to create docker network old-k8s-version-20211117231110-9504 192.168.49.0/24, will retry: subnet is taken
	I1117 23:12:43.088039   11448 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e628] amended:false}} dirty:map[] misses:0}
	I1117 23:12:43.088039   11448 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:12:43.105800   11448 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e628] amended:true}} dirty:map[192.168.49.0:0xc00014e628 192.168.58.0:0xc0006b4508] misses:0}
	I1117 23:12:43.105900   11448 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:12:43.105900   11448 network_create.go:106] attempt to create docker network old-k8s-version-20211117231110-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:12:43.109618   11448 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117231110-9504
	W1117 23:12:43.198238   11448 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117231110-9504 returned with exit code 1
	W1117 23:12:43.198379   11448 network_create.go:98] failed to create docker network old-k8s-version-20211117231110-9504 192.168.58.0/24, will retry: subnet is taken
	I1117 23:12:43.212415   11448 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e628] amended:true}} dirty:map[192.168.49.0:0xc00014e628 192.168.58.0:0xc0006b4508] misses:1}
	I1117 23:12:43.213261   11448 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:12:43.228609   11448 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e628] amended:true}} dirty:map[192.168.49.0:0xc00014e628 192.168.58.0:0xc0006b4508 192.168.67.0:0xc00014e6b0] misses:1}
	I1117 23:12:43.228609   11448 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:12:43.228609   11448 network_create.go:106] attempt to create docker network old-k8s-version-20211117231110-9504 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 23:12:43.234136   11448 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117231110-9504
	I1117 23:12:43.449703   11448 network_create.go:90] docker network old-k8s-version-20211117231110-9504 192.168.67.0/24 created
	I1117 23:12:43.449842   11448 kic.go:106] calculated static IP "192.168.67.2" for the "old-k8s-version-20211117231110-9504" container
	I1117 23:12:43.457686   11448 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:12:43.551045   11448 cli_runner.go:115] Run: docker volume create old-k8s-version-20211117231110-9504 --label name.minikube.sigs.k8s.io=old-k8s-version-20211117231110-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:12:43.652370   11448 oci.go:102] Successfully created a docker volume old-k8s-version-20211117231110-9504
	I1117 23:12:43.657351   11448 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20211117231110-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20211117231110-9504 --entrypoint /usr/bin/test -v old-k8s-version-20211117231110-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:12:44.583993   11448 oci.go:106] Successfully prepared a docker volume old-k8s-version-20211117231110-9504
	I1117 23:12:44.584119   11448 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 23:12:44.584192   11448 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:12:44.588921   11448 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:12:44.589463   11448 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:12:44.718307   11448 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:12:44.718307   11448 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:12:44.954839   11448 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:62 SystemTime:2021-11-17 23:12:44.682026055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:12:44.955502   11448 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:12:44.955502   11448 client.go:171] LocalClient.Create took 2.2806623s
	I1117 23:12:46.965852   11448 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:12:46.968961   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:12:47.065725   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:47.065866   11448 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:47.220527   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:12:47.312009   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:47.312288   11448 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:47.617867   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:12:47.715760   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:47.716076   11448 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:48.293222   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:12:48.382941   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	W1117 23:12:48.383217   11448 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	
	W1117 23:12:48.383299   11448 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:48.383299   11448 start.go:129] duration metric: createHost completed in 5.7128484s
	I1117 23:12:48.391638   11448 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:12:48.394589   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:12:48.489109   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:48.489499   11448 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:48.673530   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:12:48.763675   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:48.763838   11448 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:49.098725   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:12:49.190790   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:49.190790   11448 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:49.657268   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:12:49.747303   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	W1117 23:12:49.747733   11448 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	
	W1117 23:12:49.747885   11448 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:49.747885   11448 fix.go:57] fixHost completed within 25.2037953s
	I1117 23:12:49.747885   11448 start.go:80] releasing machines lock for "old-k8s-version-20211117231110-9504", held for 25.2037953s
	W1117 23:12:49.748157   11448 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:12:49.748187   11448 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:12:49.748187   11448 start.go:547] Will try again in 5 seconds ...
	I1117 23:12:54.749223   11448 start.go:313] acquiring machines lock for old-k8s-version-20211117231110-9504: {Name:mkf20483f474415f88720279d1dc914d2f1e71fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:54.749223   11448 start.go:317] acquired machines lock for "old-k8s-version-20211117231110-9504" in 0s
	I1117 23:12:54.749223   11448 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:12:54.749776   11448 fix.go:55] fixHost starting: 
	I1117 23:12:54.756726   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:54.859072   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:54.859072   11448 fix.go:108] recreateIfNeeded on old-k8s-version-20211117231110-9504: state= err=unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:54.859181   11448 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:12:54.863273   11448 out.go:176] * docker "old-k8s-version-20211117231110-9504" container is missing, will recreate.
	I1117 23:12:54.863416   11448 delete.go:124] DEMOLISHING old-k8s-version-20211117231110-9504 ...
	I1117 23:12:54.870639   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:54.962437   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:54.962598   11448 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:54.962686   11448 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:54.970630   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:55.055085   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:55.055085   11448 delete.go:82] Unable to get host status for old-k8s-version-20211117231110-9504, assuming it has already been deleted: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:55.065482   11448 cli_runner.go:115] Run: docker container inspect -f {{.Id}} old-k8s-version-20211117231110-9504
	W1117 23:12:55.150901   11448 cli_runner.go:162] docker container inspect -f {{.Id}} old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:12:55.150901   11448 kic.go:360] could not find the container old-k8s-version-20211117231110-9504 to remove it. will try anyways
	I1117 23:12:55.155182   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:55.246003   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:55.246185   11448 oci.go:83] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:55.250415   11448 cli_runner.go:115] Run: docker exec --privileged -t old-k8s-version-20211117231110-9504 /bin/bash -c "sudo init 0"
	W1117 23:12:55.341217   11448 cli_runner.go:162] docker exec --privileged -t old-k8s-version-20211117231110-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:12:55.341217   11448 oci.go:658] error shutdown old-k8s-version-20211117231110-9504: docker exec --privileged -t old-k8s-version-20211117231110-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:56.347081   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:56.447376   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:56.447751   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:56.447751   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:56.447751   11448 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:56.845914   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:56.935563   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:56.935730   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:56.935769   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:56.935808   11448 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:57.537901   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:57.641606   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:57.641606   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:57.641606   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:57.641606   11448 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:58.972407   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:59.064626   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:59.064800   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:12:59.064919   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:59.064944   11448 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:00.282369   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:13:00.381961   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:00.382185   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:00.382242   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:13:00.382287   11448 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:02.167934   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:13:02.256647   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:02.256734   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:02.256734   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:13:02.256813   11448 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:05.529846   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:13:05.622199   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:05.622199   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:05.622199   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:13:05.622199   11448 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:11.725251   11448 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:13:11.811699   11448 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:11.811792   11448 oci.go:670] temporary error verifying shutdown: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:11.811792   11448 oci.go:672] temporary error: container old-k8s-version-20211117231110-9504 status is  but expect it to be exited
	I1117 23:13:11.811906   11448 oci.go:87] couldn't shut down old-k8s-version-20211117231110-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	 
	I1117 23:13:11.815889   11448 cli_runner.go:115] Run: docker rm -f -v old-k8s-version-20211117231110-9504
	W1117 23:13:11.908742   11448 cli_runner.go:162] docker rm -f -v old-k8s-version-20211117231110-9504 returned with exit code 1
	W1117 23:13:11.910089   11448 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:13:11.910167   11448 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:13:12.910777   11448 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:13:12.915261   11448 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:13:12.915888   11448 start.go:160] libmachine.API.Create for "old-k8s-version-20211117231110-9504" (driver="docker")
	I1117 23:13:12.915888   11448 client.go:168] LocalClient.Create starting
	I1117 23:13:12.916741   11448 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:13:12.916969   11448 main.go:130] libmachine: Decoding PEM data...
	I1117 23:13:12.917032   11448 main.go:130] libmachine: Parsing certificate...
	I1117 23:13:12.917084   11448 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:13:12.917084   11448 main.go:130] libmachine: Decoding PEM data...
	I1117 23:13:12.917084   11448 main.go:130] libmachine: Parsing certificate...
	I1117 23:13:12.923300   11448 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117231110-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:13:13.010846   11448 network_create.go:67] Found existing network {name:old-k8s-version-20211117231110-9504 subnet:0xc001015860 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I1117 23:13:13.010846   11448 kic.go:106] calculated static IP "192.168.67.2" for the "old-k8s-version-20211117231110-9504" container
	I1117 23:13:13.017560   11448 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:13:13.120144   11448 cli_runner.go:115] Run: docker volume create old-k8s-version-20211117231110-9504 --label name.minikube.sigs.k8s.io=old-k8s-version-20211117231110-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:13:13.213139   11448 oci.go:102] Successfully created a docker volume old-k8s-version-20211117231110-9504
	I1117 23:13:13.217408   11448 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20211117231110-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20211117231110-9504 --entrypoint /usr/bin/test -v old-k8s-version-20211117231110-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:13:14.108124   11448 oci.go:106] Successfully prepared a docker volume old-k8s-version-20211117231110-9504
	I1117 23:13:14.108216   11448 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 23:13:14.108216   11448 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:13:14.113820   11448 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:13:14.116217   11448 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:13:14.241852   11448 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:13:14.242145   11448 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:13:14.466871   11448 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2021-11-17 23:13:14.199574057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:13:14.467443   11448 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:13:14.467443   11448 client.go:171] LocalClient.Create took 1.5515434s
	I1117 23:13:16.476298   11448 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:13:16.479551   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:13:16.589887   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:13:16.590091   11448 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:16.793158   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:13:16.891721   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:13:16.892115   11448 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:17.194535   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:13:17.278703   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:13:17.278948   11448 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:17.989379   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:13:18.086939   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	W1117 23:13:18.086939   11448 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	
	W1117 23:13:18.086939   11448 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:18.086939   11448 start.go:129] duration metric: createHost completed in 5.1761229s
	I1117 23:13:18.097402   11448 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:13:18.101383   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:13:18.200435   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:13:18.200651   11448 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:18.547486   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:13:18.637229   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:13:18.637594   11448 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:19.090695   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:13:19.192408   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	I1117 23:13:19.192607   11448 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:19.773828   11448 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504
	W1117 23:13:19.860295   11448 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504 returned with exit code 1
	W1117 23:13:19.860295   11448 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	
	W1117 23:13:19.860295   11448 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	I1117 23:13:19.860295   11448 fix.go:57] fixHost completed within 25.1103312s
	I1117 23:13:19.860295   11448 start.go:80] releasing machines lock for "old-k8s-version-20211117231110-9504", held for 25.1108841s
	W1117 23:13:19.860875   11448 out.go:241] * Failed to start docker container. Running "minikube delete -p old-k8s-version-20211117231110-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:13:19.870084   11448 out.go:176] 
	W1117 23:13:19.870760   11448 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:13:19.870760   11448 out.go:241] * 
	W1117 23:13:19.871855   11448 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:13:19.874502   11448 out.go:176] 

** /stderr **
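The retry loop in the stderr above is minikube resolving the SSH host port by templating `docker container inspect`; every attempt fails with "No such container" because the container was never created. A minimal sketch of the same lookup, runnable by hand (profile name copied from this log; substitute your own):

```shell
# Profile name taken from the failing test above; substitute your own.
PROFILE="old-k8s-version-20211117231110-9504"

if docker container inspect "$PROFILE" >/dev/null 2>&1; then
  # Same Go template the log shows minikube using: the host port
  # that Docker mapped to the guest's 22/tcp.
  docker container inspect \
    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
    "$PROFILE"
else
  # This is the state the log is in: "Error: No such container: ..."
  echo "no such container: $PROFILE"
fi
```

With the container absent, no amount of retrying the port lookup can succeed, which is why the `df -h /var` disk-space probe above also fails with the same root cause.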
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-20211117231110-9504 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117231110-9504:

-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117231110-9504",
	        "Id": "138f7d40ee5b452e80a6cb6a3edb4c912f1cda6b29239fd829c80dbf72edf47c",
	        "Created": "2021-11-17T23:12:43.316454722Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
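Note that the post-mortem's bare `docker inspect <name>` above succeeded even though the container is gone: `docker inspect` matches any object type by name, and here it found the leftover *network* (hence the `Scope`, `Subnet`, and empty `Containers` fields in its output). A sketch of querying the two object types explicitly to see which one actually exists (name copied from this log):

```shell
NAME="old-k8s-version-20211117231110-9504"  # profile name from this log

# Bare `docker inspect NAME` matches any object (container, network, volume...).
# Querying each type explicitly disambiguates:
docker container inspect "$NAME" --format '{{.State.Status}}' 2>/dev/null \
  || echo "container $NAME: missing"
docker network inspect "$NAME" --format '{{.Scope}}' 2>/dev/null \
  || echo "network $NAME: missing"
```

On this run the first command fails and the second succeeds, matching the stdout above: only the minikube-created bridge network survived the failed recreate.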
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504: exit status 7 (1.8318387s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:13:21.927249    2560 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (60.54s)
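The failing start log's own suggestion ("Running `minikube delete -p ...` may fix it") is the usual remediation for a half-created profile: delete it, which also removes the leftover network seen in the post-mortem, then start again. A sketch, guarded so it is safe to paste; the profile name and flags are copied from the failing invocation above (test-only flags like `--alsologtostderr` dropped):

```shell
PROFILE="old-k8s-version-20211117231110-9504"

if command -v minikube >/dev/null 2>&1; then
  # Remove all stale profile state: container, network, volumes, config.
  minikube delete -p "$PROFILE"
  # Retry the start with the same essential flags the test used.
  minikube start -p "$PROFILE" --memory=2200 --wait=true \
    --driver=docker --kubernetes-version=v1.14.0
else
  echo "minikube not on PATH"
fi
```

Whether the retry succeeds depends on the underlying "Unable to locate kernel modules" condition in the Docker Desktop VM; if it recurs, the advice box above (filing an issue with `minikube logs --file=logs.txt` attached) applies.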

TestStartStop/group/embed-certs/serial/SecondStart (60.31s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20211117231110-9504 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.22.3

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p embed-certs-20211117231110-9504 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.22.3: exit status 80 (58.2414559s)

-- stdout --
	* [embed-certs-20211117231110-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	* Starting control plane node embed-certs-20211117231110-9504 in cluster embed-certs-20211117231110-9504
	* Pulling base image ...
	* docker "embed-certs-20211117231110-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-20211117231110-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:12:22.513689    7712 out.go:297] Setting OutFile to fd 1836 ...
	I1117 23:12:22.578851    7712 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:12:22.578851    7712 out.go:310] Setting ErrFile to fd 1864...
	I1117 23:12:22.578851    7712 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:12:22.591855    7712 out.go:304] Setting JSON to false
	I1117 23:12:22.593846    7712 start.go:112] hostinfo: {"hostname":"minikube2","uptime":80058,"bootTime":1637110684,"procs":133,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:12:22.593846    7712 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:12:22.599852    7712 out.go:176] * [embed-certs-20211117231110-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:12:22.599852    7712 notify.go:174] Checking for updates...
	I1117 23:12:22.601881    7712 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:12:22.604857    7712 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:12:22.606845    7712 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:12:22.607854    7712 config.go:176] Loaded profile config "embed-certs-20211117231110-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:12:22.609849    7712 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:12:24.219002    7712 docker.go:132] docker version: linux-19.03.12
	I1117 23:12:24.222797    7712 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:12:24.592166    7712 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:12:24.313616985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:12:24.596160    7712 out.go:176] * Using the docker driver based on existing profile
	I1117 23:12:24.596690    7712 start.go:280] selected driver: docker
	I1117 23:12:24.596690    7712 start.go:775] validating driver "docker" against &{Name:embed-certs-20211117231110-9504 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:embed-certs-20211117231110-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:12:24.596895    7712 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:12:24.661592    7712 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:12:25.023248    7712 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:49 OomKillDisable:true NGoroutines:71 SystemTime:2021-11-17 23:12:24.745675124 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:12:25.023248    7712 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:12:25.023248    7712 cni.go:93] Creating CNI manager for ""
	I1117 23:12:25.023248    7712 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:12:25.023248    7712 start_flags.go:282] config:
	{Name:embed-certs-20211117231110-9504 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:embed-certs-20211117231110-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:12:25.030254    7712 out.go:176] * Starting control plane node embed-certs-20211117231110-9504 in cluster embed-certs-20211117231110-9504
	I1117 23:12:25.030254    7712 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:12:25.034252    7712 out.go:176] * Pulling base image ...
	I1117 23:12:25.034252    7712 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:12:25.034252    7712 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:12:25.034252    7712 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:12:25.034252    7712 cache.go:57] Caching tarball of preloaded images
	I1117 23:12:25.034252    7712 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:12:25.035251    7712 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:12:25.035251    7712 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\embed-certs-20211117231110-9504\config.json ...
	I1117 23:12:25.137333    7712 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:12:25.137397    7712 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:12:25.137480    7712 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:12:25.137741    7712 start.go:313] acquiring machines lock for embed-certs-20211117231110-9504: {Name:mke5160b0799570aa8eaa937f5551637df079826 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:25.137741    7712 start.go:317] acquired machines lock for "embed-certs-20211117231110-9504" in 0s
	I1117 23:12:25.137741    7712 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:12:25.137741    7712 fix.go:55] fixHost starting: 
	I1117 23:12:25.147324    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:25.254485    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:25.254688    7712 fix.go:108] recreateIfNeeded on embed-certs-20211117231110-9504: state= err=unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:25.254730    7712 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:12:25.258834    7712 out.go:176] * docker "embed-certs-20211117231110-9504" container is missing, will recreate.
	I1117 23:12:25.258904    7712 delete.go:124] DEMOLISHING embed-certs-20211117231110-9504 ...
	I1117 23:12:25.265773    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:25.363482    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:25.363712    7712 stop.go:75] unable to get state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:25.363712    7712 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:25.371963    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:25.464406    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:25.464704    7712 delete.go:82] Unable to get host status for embed-certs-20211117231110-9504, assuming it has already been deleted: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:25.468523    7712 cli_runner.go:115] Run: docker container inspect -f {{.Id}} embed-certs-20211117231110-9504
	W1117 23:12:25.565209    7712 cli_runner.go:162] docker container inspect -f {{.Id}} embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:25.565209    7712 kic.go:360] could not find the container embed-certs-20211117231110-9504 to remove it. will try anyways
	I1117 23:12:25.568198    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:25.654536    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:25.654607    7712 oci.go:83] error getting container status, will try to delete anyways: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:25.658743    7712 cli_runner.go:115] Run: docker exec --privileged -t embed-certs-20211117231110-9504 /bin/bash -c "sudo init 0"
	W1117 23:12:25.749756    7712 cli_runner.go:162] docker exec --privileged -t embed-certs-20211117231110-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:12:25.749756    7712 oci.go:658] error shutdown embed-certs-20211117231110-9504: docker exec --privileged -t embed-certs-20211117231110-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:26.754791    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:26.855011    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:26.855011    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:26.855011    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:26.855011    7712 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:27.414095    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:27.505593    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:27.505712    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:27.505827    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:27.506033    7712 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:28.592619    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:28.683597    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:28.683597    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:28.683597    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:28.683597    7712 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:30.000612    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:30.100733    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:30.100733    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:30.100733    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:30.100733    7712 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:31.688115    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:31.777297    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:31.777382    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:31.777382    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:31.777382    7712 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:34.124023    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:34.213179    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:34.213179    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:34.213179    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:34.213179    7712 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:38.724518    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:38.823031    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:38.823190    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:38.823244    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:38.823369    7712 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:42.049459    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:42.138578    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:42.138858    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:42.138858    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:42.138980    7712 oci.go:87] couldn't shut down embed-certs-20211117231110-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	 
	I1117 23:12:42.145829    7712 cli_runner.go:115] Run: docker rm -f -v embed-certs-20211117231110-9504
	W1117 23:12:42.233951    7712 cli_runner.go:162] docker rm -f -v embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:12:42.235288    7712 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:12:42.235288    7712 fix.go:120] Sleeping 1 second for extra luck!
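	The retry intervals logged above (552ms, 1.08s, 1.31s, 1.58s, 2.34s, 4.51s, 3.22s) come from minikube's `retry.go`, which wraps a jittered exponential backoff; the non-monotonic last step suggests per-attempt randomization. As a rough illustration only (not minikube's actual parameters or implementation), such a backoff can be sketched in Python:

```python
import random

def backoff_intervals(base=0.5, factor=1.5, jitter=0.3, cap=5.0, attempts=7):
    """Yield jittered, exponentially growing retry delays in seconds.

    Illustrative sketch: minikube's pkg/util/retry builds on a
    wait.Backoff-style policy whose exact base/factor/jitter values
    are not visible in this log.
    """
    delay = base
    for _ in range(attempts):
        # Randomize each delay by +/- jitter so concurrent retries spread
        # out; this is why logged delays need not increase monotonically.
        yield min(cap, delay * random.uniform(1 - jitter, 1 + jitter))
        delay *= factor

delays = list(backoff_intervals())
```

Each attempt in the log re-runs `docker container inspect` after sleeping for the yielded interval, giving up once the attempt budget is exhausted (the "couldn't shut down ... (might be okay)" line above).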
	I1117 23:12:43.237203    7712 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:12:43.241357    7712 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:12:43.242001    7712 start.go:160] libmachine.API.Create for "embed-certs-20211117231110-9504" (driver="docker")
	I1117 23:12:43.242001    7712 client.go:168] LocalClient.Create starting
	I1117 23:12:43.242540    7712 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:12:43.242824    7712 main.go:130] libmachine: Decoding PEM data...
	I1117 23:12:43.242858    7712 main.go:130] libmachine: Parsing certificate...
	I1117 23:12:43.243037    7712 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:12:43.243258    7712 main.go:130] libmachine: Decoding PEM data...
	I1117 23:12:43.243258    7712 main.go:130] libmachine: Parsing certificate...
	I1117 23:12:43.248901    7712 cli_runner.go:115] Run: docker network inspect embed-certs-20211117231110-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:12:43.368288    7712 cli_runner.go:162] docker network inspect embed-certs-20211117231110-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:12:43.372308    7712 network_create.go:254] running [docker network inspect embed-certs-20211117231110-9504] to gather additional debugging logs...
	I1117 23:12:43.372409    7712 cli_runner.go:115] Run: docker network inspect embed-certs-20211117231110-9504
	W1117 23:12:43.459822    7712 cli_runner.go:162] docker network inspect embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:43.459822    7712 network_create.go:257] error running [docker network inspect embed-certs-20211117231110-9504]: docker network inspect embed-certs-20211117231110-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20211117231110-9504
	I1117 23:12:43.459822    7712 network_create.go:259] output of [docker network inspect embed-certs-20211117231110-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20211117231110-9504
	
	** /stderr **
	I1117 23:12:43.464205    7712 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:12:43.581615    7712 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00078c4c8] misses:0}
	I1117 23:12:43.581615    7712 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:12:43.581615    7712 network_create.go:106] attempt to create docker network embed-certs-20211117231110-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:12:43.585554    7712 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117231110-9504
	W1117 23:12:43.686534    7712 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:12:43.686534    7712 network_create.go:98] failed to create docker network embed-certs-20211117231110-9504 192.168.49.0/24, will retry: subnet is taken
	I1117 23:12:43.709737    7712 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00078c4c8] amended:false}} dirty:map[] misses:0}
	I1117 23:12:43.709737    7712 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:12:43.723835    7712 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00078c4c8] amended:true}} dirty:map[192.168.49.0:0xc00078c4c8 192.168.58.0:0xc00012a388] misses:0}
	I1117 23:12:43.723835    7712 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:12:43.723835    7712 network_create.go:106] attempt to create docker network embed-certs-20211117231110-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:12:43.726855    7712 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117231110-9504
	W1117 23:12:43.827532    7712 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:12:43.827532    7712 network_create.go:98] failed to create docker network embed-certs-20211117231110-9504 192.168.58.0/24, will retry: subnet is taken
	I1117 23:12:43.842248    7712 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00078c4c8] amended:true}} dirty:map[192.168.49.0:0xc00078c4c8 192.168.58.0:0xc00012a388] misses:1}
	I1117 23:12:43.842248    7712 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:12:43.858027    7712 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00078c4c8] amended:true}} dirty:map[192.168.49.0:0xc00078c4c8 192.168.58.0:0xc00012a388 192.168.67.0:0xc0006fa2a0] misses:1}
	I1117 23:12:43.858027    7712 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:12:43.858146    7712 network_create.go:106] attempt to create docker network embed-certs-20211117231110-9504 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 23:12:43.869900    7712 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117231110-9504
	W1117 23:12:43.973343    7712 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:12:43.973343    7712 network_create.go:98] failed to create docker network embed-certs-20211117231110-9504 192.168.67.0/24, will retry: subnet is taken
	I1117 23:12:43.987357    7712 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00078c4c8] amended:true}} dirty:map[192.168.49.0:0xc00078c4c8 192.168.58.0:0xc00012a388 192.168.67.0:0xc0006fa2a0] misses:2}
	I1117 23:12:43.988352    7712 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:12:44.002355    7712 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00078c4c8] amended:true}} dirty:map[192.168.49.0:0xc00078c4c8 192.168.58.0:0xc00012a388 192.168.67.0:0xc0006fa2a0 192.168.76.0:0xc0003000d0] misses:2}
	I1117 23:12:44.002355    7712 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:12:44.002355    7712 network_create.go:106] attempt to create docker network embed-certs-20211117231110-9504 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1117 23:12:44.006359    7712 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117231110-9504
	I1117 23:12:44.207609    7712 network_create.go:90] docker network embed-certs-20211117231110-9504 192.168.76.0/24 created
	I1117 23:12:44.207609    7712 kic.go:106] calculated static IP "192.168.76.2" for the "embed-certs-20211117231110-9504" container
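	The subnet probing above starts at 192.168.49.0/24 and, on each "subnet is taken" failure from `docker network create`, advances the third octet by 9 (49 → 58 → 67 → 76) until creation succeeds; the container's static IP is then the first client address (.2) of the chosen /24, with the gateway at .1. A hedged sketch of that stepping logic (the real code lives in minikube's pkg/network; the step size and count here are inferred from this log):

```python
import ipaddress

def candidate_subnets(start="192.168.49.0", step=9, count=4):
    """Generate the /24 candidates minikube appears to probe, stepping the
    third octet by `step` each time the previous subnet was already taken."""
    base = ipaddress.ip_address(start)
    for i in range(count):
        # Adding step*256 to the address advances the third octet by `step`.
        yield ipaddress.ip_network(f"{base + i * step * 256}/24", strict=True)

subnets = [str(n) for n in candidate_subnets()]
chosen = ipaddress.ip_network(subnets[-1])
gateway = str(chosen.network_address + 1)    # 192.168.76.1 in this run
static_ip = str(chosen.network_address + 2)  # 192.168.76.2 in this run
```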
	I1117 23:12:44.215974    7712 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:12:44.309371    7712 cli_runner.go:115] Run: docker volume create embed-certs-20211117231110-9504 --label name.minikube.sigs.k8s.io=embed-certs-20211117231110-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:12:44.399583    7712 oci.go:102] Successfully created a docker volume embed-certs-20211117231110-9504
	I1117 23:12:44.403761    7712 cli_runner.go:115] Run: docker run --rm --name embed-certs-20211117231110-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211117231110-9504 --entrypoint /usr/bin/test -v embed-certs-20211117231110-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:12:45.297784    7712 oci.go:106] Successfully prepared a docker volume embed-certs-20211117231110-9504
	I1117 23:12:45.298187    7712 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:12:45.298297    7712 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:12:45.303309    7712 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:12:45.303623    7712 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:12:45.423425    7712 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:12:45.423533    7712 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:12:45.650156    7712 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 23:12:45.383763614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:12:45.650418    7712 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:12:45.650418    7712 client.go:171] LocalClient.Create took 2.408399s
	I1117 23:12:47.658132    7712 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:12:47.662159    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:12:47.755356    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:47.755737    7712 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:47.910204    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:12:48.004704    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:48.004704    7712 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:48.310687    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:12:48.410457    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:48.410722    7712 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:48.987723    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:12:49.089031    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:12:49.089254    7712 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	
	W1117 23:12:49.089254    7712 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:49.089254    7712 start.go:129] duration metric: createHost completed in 5.851884s
	I1117 23:12:49.098154    7712 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:12:49.102773    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:12:49.200577    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:49.200931    7712 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:49.385197    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:12:49.474951    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:49.474951    7712 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:49.810712    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:12:49.902673    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:49.902968    7712 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:50.367836    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:12:50.465881    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:12:50.466062    7712 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	
	W1117 23:12:50.466062    7712 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:50.466062    7712 fix.go:57] fixHost completed within 25.3281316s
	I1117 23:12:50.466062    7712 start.go:80] releasing machines lock for "embed-certs-20211117231110-9504", held for 25.3281316s
	W1117 23:12:50.466062    7712 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:12:50.466748    7712 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:12:50.466848    7712 start.go:547] Will try again in 5 seconds ...
	I1117 23:12:55.468689    7712 start.go:313] acquiring machines lock for embed-certs-20211117231110-9504: {Name:mke5160b0799570aa8eaa937f5551637df079826 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:55.468981    7712 start.go:317] acquired machines lock for "embed-certs-20211117231110-9504" in 104.7µs
	I1117 23:12:55.469145    7712 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:12:55.469221    7712 fix.go:55] fixHost starting: 
	I1117 23:12:55.478076    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:55.572914    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:55.573093    7712 fix.go:108] recreateIfNeeded on embed-certs-20211117231110-9504: state= err=unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:55.573093    7712 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:12:55.577419    7712 out.go:176] * docker "embed-certs-20211117231110-9504" container is missing, will recreate.
	I1117 23:12:55.577494    7712 delete.go:124] DEMOLISHING embed-certs-20211117231110-9504 ...
	I1117 23:12:55.587381    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:55.678774    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:55.678774    7712 stop.go:75] unable to get state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:55.678774    7712 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:55.686683    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:55.784637    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:55.784749    7712 delete.go:82] Unable to get host status for embed-certs-20211117231110-9504, assuming it has already been deleted: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:55.789575    7712 cli_runner.go:115] Run: docker container inspect -f {{.Id}} embed-certs-20211117231110-9504
	W1117 23:12:55.884982    7712 cli_runner.go:162] docker container inspect -f {{.Id}} embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:12:55.885141    7712 kic.go:360] could not find the container embed-certs-20211117231110-9504 to remove it. will try anyways
	I1117 23:12:55.889789    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:55.977790    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:55.978034    7712 oci.go:83] error getting container status, will try to delete anyways: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:55.982375    7712 cli_runner.go:115] Run: docker exec --privileged -t embed-certs-20211117231110-9504 /bin/bash -c "sudo init 0"
	W1117 23:12:56.072321    7712 cli_runner.go:162] docker exec --privileged -t embed-certs-20211117231110-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:12:56.072595    7712 oci.go:658] error shutdown embed-certs-20211117231110-9504: docker exec --privileged -t embed-certs-20211117231110-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:57.077368    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:57.166495    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:57.166540    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:57.166635    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:57.166635    7712 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:57.562187    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:57.657198    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:57.657198    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:57.657198    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:57.657198    7712 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:58.257292    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:58.353034    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:58.353144    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:58.353200    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:58.353274    7712 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:59.685218    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:12:59.774543    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:59.774543    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:12:59.774795    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:12:59.774795    7712 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:00.994477    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:13:01.090137    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:01.090391    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:01.090430    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:13:01.090430    7712 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:02.877052    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:13:02.965952    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:02.965952    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:02.965952    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:13:02.965952    7712 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:06.241053    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:13:06.330267    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:06.330499    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:06.330499    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:13:06.330499    7712 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:12.437273    7712 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:13:12.526702    7712 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:12.526839    7712 oci.go:670] temporary error verifying shutdown: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:12.526839    7712 oci.go:672] temporary error: container embed-certs-20211117231110-9504 status is  but expect it to be exited
	I1117 23:13:12.526949    7712 oci.go:87] couldn't shut down embed-certs-20211117231110-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	 
	I1117 23:13:12.530915    7712 cli_runner.go:115] Run: docker rm -f -v embed-certs-20211117231110-9504
	W1117 23:13:12.616998    7712 cli_runner.go:162] docker rm -f -v embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:13:12.618081    7712 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:13:12.618081    7712 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:13:13.619728    7712 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:13:13.623710    7712 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:13:13.623925    7712 start.go:160] libmachine.API.Create for "embed-certs-20211117231110-9504" (driver="docker")
	I1117 23:13:13.623925    7712 client.go:168] LocalClient.Create starting
	I1117 23:13:13.623925    7712 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:13:13.623925    7712 main.go:130] libmachine: Decoding PEM data...
	I1117 23:13:13.623925    7712 main.go:130] libmachine: Parsing certificate...
	I1117 23:13:13.624845    7712 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:13:13.625100    7712 main.go:130] libmachine: Decoding PEM data...
	I1117 23:13:13.625100    7712 main.go:130] libmachine: Parsing certificate...
	I1117 23:13:13.629687    7712 cli_runner.go:115] Run: docker network inspect embed-certs-20211117231110-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:13:13.723118    7712 network_create.go:67] Found existing network {name:embed-certs-20211117231110-9504 subnet:0xc0012080c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 76 1] mtu:1500}
	I1117 23:13:13.723371    7712 kic.go:106] calculated static IP "192.168.76.2" for the "embed-certs-20211117231110-9504" container
	I1117 23:13:13.730821    7712 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:13:13.822455    7712 cli_runner.go:115] Run: docker volume create embed-certs-20211117231110-9504 --label name.minikube.sigs.k8s.io=embed-certs-20211117231110-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:13:13.914228    7712 oci.go:102] Successfully created a docker volume embed-certs-20211117231110-9504
	I1117 23:13:13.918017    7712 cli_runner.go:115] Run: docker run --rm --name embed-certs-20211117231110-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211117231110-9504 --entrypoint /usr/bin/test -v embed-certs-20211117231110-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:13:14.808643    7712 oci.go:106] Successfully prepared a docker volume embed-certs-20211117231110-9504
	I1117 23:13:14.808921    7712 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:13:14.808921    7712 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:13:14.813352    7712 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:13:14.813639    7712 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:13:14.921532    7712 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:13:14.921532    7712 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117231110-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:13:15.166084    7712 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 23:13:14.90600235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:13:15.166643    7712 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:13:15.166643    7712 client.go:171] LocalClient.Create took 1.5427063s
	I1117 23:13:17.174987    7712 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:13:17.178716    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:13:17.271840    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:13:17.271967    7712 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:17.473307    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:13:17.559883    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:13:17.560019    7712 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:17.863942    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:13:17.955616    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:13:17.955800    7712 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:18.666409    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:13:18.761460    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:13:18.761460    7712 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	
	W1117 23:13:18.761460    7712 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:18.761460    7712 start.go:129] duration metric: createHost completed in 5.1416187s
	I1117 23:13:18.769787    7712 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:13:18.773690    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:13:18.854748    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:13:18.854891    7712 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:19.201961    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:13:19.300546    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:13:19.300797    7712 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:19.754888    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:13:19.853603    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	I1117 23:13:19.853762    7712 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:20.435012    7712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504
	W1117 23:13:20.526598    7712 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504 returned with exit code 1
	W1117 23:13:20.527013    7712 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	
	W1117 23:13:20.527098    7712 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117231110-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117231110-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	I1117 23:13:20.527098    7712 fix.go:57] fixHost completed within 25.0576893s
	I1117 23:13:20.527098    7712 start.go:80] releasing machines lock for "embed-certs-20211117231110-9504", held for 25.0578609s
	W1117 23:13:20.527098    7712 out.go:241] * Failed to start docker container. Running "minikube delete -p embed-certs-20211117231110-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p embed-certs-20211117231110-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:13:20.532039    7712 out.go:176] 
	W1117 23:13:20.532039    7712 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:13:20.532039    7712 out.go:241] * 
	* 
	W1117 23:13:20.533361    7712 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:13:20.535125    7712 out.go:176] 

** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p embed-certs-20211117231110-9504 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.22.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117231110-9504:

-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117231110-9504",
	        "Id": "49a30a26e6046d6b1c8274a30b93a0b079cde1a03a9b3864d276f6a2cf99256e",
	        "Created": "2021-11-17T23:12:44.093259743Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504: exit status 7 (1.8554472s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:13:22.611566    6588 status.go:247] status error: host: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (60.31s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (3.91s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20211117231152-9504 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211117231152-9504 create -f testdata\busybox.yaml: exit status 1 (205.6926ms)

** stderr ** 
	error: context "default-k8s-different-port-20211117231152-9504" does not exist

** /stderr **
start_stop_delete_test.go:181: kubectl --context default-k8s-different-port-20211117231152-9504 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117231152-9504
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117231152-9504:

-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117231152-9504",
	        "Id": "958aad75e6207a47e0822eec615f5b93779f44fc3bfdf4fedaf773fd2df7bb30",
	        "Created": "2021-11-17T23:11:56.13952141Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504: exit status 7 (1.740899s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:12:34.479464    6940 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117231152-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117231152-9504
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117231152-9504:

-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117231152-9504",
	        "Id": "958aad75e6207a47e0822eec615f5b93779f44fc3bfdf4fedaf773fd2df7bb30",
	        "Created": "2021-11-17T23:11:56.13952141Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504: exit status 7 (1.7517317s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:12:36.331442   10976 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117231152-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (3.91s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (3.91s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20211117231152-9504 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20211117231152-9504 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.8172378s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20211117231152-9504 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211117231152-9504 describe deploy/metrics-server -n kube-system: exit status 1 (219.7079ms)

** stderr ** 
	error: context "default-k8s-different-port-20211117231152-9504" does not exist

** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-different-port-20211117231152-9504 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117231152-9504
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117231152-9504:

-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117231152-9504",
	        "Id": "958aad75e6207a47e0822eec615f5b93779f44fc3bfdf4fedaf773fd2df7bb30",
	        "Created": "2021-11-17T23:11:56.13952141Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504: exit status 7 (1.7613317s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:12:40.247739    6916 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117231152-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (3.91s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (5.57s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504: exit status 7 (1.7870531s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:12:39.843374    7808 status.go:247] status error: host: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504

** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20211117231133-9504 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20211117231133-9504 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.7790112s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117231133-9504
helpers_test.go:235: (dbg) docker inspect no-preload-20211117231133-9504:

-- stdout --
	[
	    {
	        "Name": "no-preload-20211117231133-9504",
	        "Id": "03729738a14bc6e222aa8b654491630e4091cfad66b947339e45d35ea236214f",
	        "Created": "2021-11-17T23:12:04.406110101Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504: exit status 7 (1.8966302s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:12:43.631545   12220 status.go:247] status error: host: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117231133-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (5.57s)

TestStartStop/group/default-k8s-different-port/serial/Stop (17.08s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20211117231152-9504 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20211117231152-9504 --alsologtostderr -v=3: exit status 82 (15.1819598s)

-- stdout --
	* Stopping node "default-k8s-different-port-20211117231152-9504"  ...
	* Stopping node "default-k8s-different-port-20211117231152-9504"  ...
	* Stopping node "default-k8s-different-port-20211117231152-9504"  ...
	* Stopping node "default-k8s-different-port-20211117231152-9504"  ...
	* Stopping node "default-k8s-different-port-20211117231152-9504"  ...
	* Stopping node "default-k8s-different-port-20211117231152-9504"  ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:12:40.442530    6100 out.go:297] Setting OutFile to fd 1984 ...
	I1117 23:12:40.510102    6100 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:12:40.510102    6100 out.go:310] Setting ErrFile to fd 1388...
	I1117 23:12:40.510102    6100 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:12:40.523034    6100 out.go:304] Setting JSON to false
	I1117 23:12:40.524054    6100 daemonize_windows.go:45] trying to kill existing schedule stop for profile default-k8s-different-port-20211117231152-9504...
	I1117 23:12:40.535028    6100 ssh_runner.go:152] Run: systemctl --version
	I1117 23:12:40.538506    6100 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:42.029091    6100 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:42.029151    6100 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: (1.4904187s)
	I1117 23:12:42.029241    6100 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:42.313001    6100 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:42.406331    6100 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:42.413949    6100 ssh_runner.go:152] Run: sudo service minikube-scheduled-stop stop
	I1117 23:12:42.417586    6100 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:42.506361    6100 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:42.506619    6100 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:42.803293    6100 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:42.893614    6100 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:42.893875    6100 retry.go:31] will retry after 351.64282ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:43.250245    6100 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:43.363781    6100 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:43.363997    6100 retry.go:31] will retry after 520.108592ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:43.889910    6100 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:12:43.992374    6100 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:12:43.992374    6100 openrc.go:165] stop output: 
	E1117 23:12:43.992374    6100 daemonize_windows.go:39] error terminating scheduled stop for profile default-k8s-different-port-20211117231152-9504: stopping schedule-stop service for profile default-k8s-different-port-20211117231152-9504: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:43.992374    6100 mustload.go:65] Loading cluster: default-k8s-different-port-20211117231152-9504
	I1117 23:12:43.993372    6100 config.go:176] Loaded profile config "default-k8s-different-port-20211117231152-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:12:43.993372    6100 stop.go:39] StopHost: default-k8s-different-port-20211117231152-9504
	I1117 23:12:43.997372    6100 out.go:176] * Stopping node "default-k8s-different-port-20211117231152-9504"  ...
	I1117 23:12:44.005355    6100 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:44.096504    6100 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:44.096504    6100 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	W1117 23:12:44.096504    6100 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:44.096504    6100 retry.go:31] will retry after 565.637019ms: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:44.662387    6100 stop.go:39] StopHost: default-k8s-different-port-20211117231152-9504
	I1117 23:12:44.665309    6100 out.go:176] * Stopping node "default-k8s-different-port-20211117231152-9504"  ...
	I1117 23:12:44.677149    6100 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:44.780801    6100 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:44.781061    6100 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	W1117 23:12:44.781125    6100 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:44.781125    6100 retry.go:31] will retry after 984.778882ms: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:45.767267    6100 stop.go:39] StopHost: default-k8s-different-port-20211117231152-9504
	I1117 23:12:45.771769    6100 out.go:176] * Stopping node "default-k8s-different-port-20211117231152-9504"  ...
	I1117 23:12:45.779233    6100 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:45.864803    6100 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:45.864959    6100 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	W1117 23:12:45.865089    6100 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:45.865089    6100 retry.go:31] will retry after 1.343181417s: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:47.208531    6100 stop.go:39] StopHost: default-k8s-different-port-20211117231152-9504
	I1117 23:12:47.217542    6100 out.go:176] * Stopping node "default-k8s-different-port-20211117231152-9504"  ...
	I1117 23:12:47.225377    6100 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:47.322671    6100 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:47.322956    6100 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	W1117 23:12:47.322956    6100 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:47.322956    6100 retry.go:31] will retry after 2.703077529s: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:50.026360    6100 stop.go:39] StopHost: default-k8s-different-port-20211117231152-9504
	I1117 23:12:50.030631    6100 out.go:176] * Stopping node "default-k8s-different-port-20211117231152-9504"  ...
	I1117 23:12:50.037128    6100 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:50.142231    6100 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:50.142455    6100 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	W1117 23:12:50.142455    6100 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:50.142455    6100 retry.go:31] will retry after 5.139513932s: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:55.284016    6100 stop.go:39] StopHost: default-k8s-different-port-20211117231152-9504
	I1117 23:12:55.291996    6100 out.go:176] * Stopping node "default-k8s-different-port-20211117231152-9504"  ...
	I1117 23:12:55.300642    6100 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:12:55.388936    6100 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:55.389147    6100 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	W1117 23:12:55.389226    6100 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:12:55.392340    6100 out.go:176] 
	W1117 23:12:55.392582    6100 out.go:241] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-different-port-20211117231152-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-different-port-20211117231152-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	
	W1117 23:12:55.392611    6100 out.go:241] * 
	* 
	W1117 23:12:55.408786    6100 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:12:55.418818    6100 out.go:176] 

** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20211117231152-9504 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117231152-9504
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117231152-9504:

-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117231152-9504",
	        "Id": "958aad75e6207a47e0822eec615f5b93779f44fc3bfdf4fedaf773fd2df7bb30",
	        "Created": "2021-11-17T23:11:56.13952141Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504: exit status 7 (1.7734158s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:12:57.332110   11336 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117231152-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Stop (17.08s)
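The stderr above shows minikube retrying `docker container inspect` with a growing backoff (276 ms, 291 ms, 351 ms, ... up to ~5 s) before giving up with GUEST_STOP_TIMEOUT. A minimal shell sketch of that check-and-retry pattern follows; `inspect_state` is a hypothetical stub standing in for the real `docker container inspect` call, and it always fails here, mirroring the "No such container" errors in this run:

```shell
#!/bin/sh
# Sketch of the check-and-retry loop seen in the log above.
# inspect_state is a stub for:
#   docker container inspect "$1" --format='{{.State.Status}}'
# It always fails, like the missing container in this run.
inspect_state() {
  echo "Error: No such container: $1" >&2
  return 1
}

name="default-k8s-different-port-20211117231152-9504"
attempts=0
max_attempts=5

until inspect_state "$name" 2>/dev/null; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge "$max_attempts" ]; then
    echo "GUEST_STOP_TIMEOUT: giving up after $attempts attempts"
    break
  fi
  # minikube's retry.go sleeps a growing sub-second interval here
done
echo "attempts=$attempts"
```

Against a live Docker daemon the stub would be replaced by the real command, and the loop exits as soon as the container state becomes readable; with the container gone, every attempt fails and the bounded retry budget is exhausted, which is exactly the exit status 82 path above.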

TestStartStop/group/no-preload/serial/SecondStart (59.83s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20211117231133-9504 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.4-rc.0

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-20211117231133-9504 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.4-rc.0: exit status 80 (57.7312348s)

-- stdout --
	* [no-preload-20211117231133-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-20211117231133-9504 in cluster no-preload-20211117231133-9504
	* Pulling base image ...
	* docker "no-preload-20211117231133-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-20211117231133-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:12:43.848179    5028 out.go:297] Setting OutFile to fd 1856 ...
	I1117 23:12:43.936719    5028 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:12:43.936719    5028 out.go:310] Setting ErrFile to fd 1420...
	I1117 23:12:43.936719    5028 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:12:43.950485    5028 out.go:304] Setting JSON to false
	I1117 23:12:43.952803    5028 start.go:112] hostinfo: {"hostname":"minikube2","uptime":80079,"bootTime":1637110684,"procs":134,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:12:43.952803    5028 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:12:43.956915    5028 out.go:176] * [no-preload-20211117231133-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:12:43.956915    5028 notify.go:174] Checking for updates...
	I1117 23:12:43.960663    5028 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:12:43.963792    5028 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:12:43.966070    5028 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:12:43.967344    5028 config.go:176] Loaded profile config "no-preload-20211117231133-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 23:12:43.974349    5028 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:12:45.616850    5028 docker.go:132] docker version: linux-19.03.12
	I1117 23:12:45.624269    5028 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:12:45.958414    5028 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:12:45.699956483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:12:45.963094    5028 out.go:176] * Using the docker driver based on existing profile
	I1117 23:12:45.963094    5028 start.go:280] selected driver: docker
	I1117 23:12:45.963094    5028 start.go:775] validating driver "docker" against &{Name:no-preload-20211117231133-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:no-preload-20211117231133-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:12:45.963094    5028 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:12:46.022013    5028 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:12:46.353875    5028 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:12:46.099254175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:12:46.354339    5028 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:12:46.354339    5028 cni.go:93] Creating CNI manager for ""
	I1117 23:12:46.354442    5028 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:12:46.354442    5028 start_flags.go:282] config:
	{Name:no-preload-20211117231133-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:no-preload-20211117231133-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:12:46.358966    5028 out.go:176] * Starting control plane node no-preload-20211117231133-9504 in cluster no-preload-20211117231133-9504
	I1117 23:12:46.358966    5028 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:12:46.361857    5028 out.go:176] * Pulling base image ...
	I1117 23:12:46.361857    5028 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 23:12:46.362401    5028 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:12:46.362595    5028 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.22.4-rc.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.4-rc.0
	I1117 23:12:46.362595    5028 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.22.4-rc.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.4-rc.0
	I1117 23:12:46.362595    5028 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.3.1
	I1117 23:12:46.362595    5028 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns:v1.8.4 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns_v1.8.4
	I1117 23:12:46.362595    5028 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.22.4-rc.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.4-rc.0
	I1117 23:12:46.362595    5028 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd:3.5.0-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.5.0-0
	I1117 23:12:46.362595    5028 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause:3.5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.5
	I1117 23:12:46.362595    5028 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5
	I1117 23:12:46.362595    5028 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.7 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.7
	I1117 23:12:46.362595    5028 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.22.4-rc.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.4-rc.0
	I1117 23:12:46.362540    5028 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-20211117231133-9504\config.json ...
	I1117 23:12:46.500302    5028 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:12:46.500565    5028 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:12:46.500565    5028 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:12:46.500713    5028 start.go:313] acquiring machines lock for no-preload-20211117231133-9504: {Name:mk72290d14abe23f276712b59e3d3211293a2fa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:46.500986    5028 start.go:317] acquired machines lock for "no-preload-20211117231133-9504" in 186.4µs
	I1117 23:12:46.500986    5028 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:12:46.500986    5028 fix.go:55] fixHost starting: 
	I1117 23:12:46.512576    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	I1117 23:12:46.546364    5028 cache.go:107] acquiring lock: {Name:mk0c4800ed5b13ab291ff2265133357b20336a8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:46.547082    5028 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.22.4-rc.0
	I1117 23:12:46.548155    5028 cache.go:107] acquiring lock: {Name:mk07753e378828d6a9b5c8273895167d2e474020 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:46.548155    5028 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.3.1 exists
	I1117 23:12:46.548692    5028 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\docker.io\\kubernetesui\\dashboard_v2.3.1" took 186.0951ms
	I1117 23:12:46.548791    5028 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.3.1 succeeded
	I1117 23:12:46.549723    5028 cache.go:107] acquiring lock: {Name:mkfa4d3d6685004524c7d13a9f49266b74c76ab8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:46.550018    5028 cache.go:107] acquiring lock: {Name:mkecddbdf5bdb96eb368bff20b8b8044de9c16ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:46.552900    5028 cache.go:107] acquiring lock: {Name:mke9439de88fd7cfde7b3c89f335155fffdfe7dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:46.553189    5028 cache.go:107] acquiring lock: {Name:mkf2d8ca031c09006306827859434409adc972c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:46.553189    5028 cache.go:107] acquiring lock: {Name:mk16b2c84e0562e7dfabdafa8a4b202b59aeeb0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:46.553189    5028 cache.go:107] acquiring lock: {Name:mkf3b50dab57c642704a948e6ed1b538aa89c43f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:46.553391    5028 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1117 23:12:46.553437    5028 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.4-rc.0
	I1117 23:12:46.553540    5028 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.7 exists
	I1117 23:12:46.553540    5028 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns_v1.8.4 exists
	I1117 23:12:46.553645    5028 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 190.943ms
	I1117 23:12:46.553693    5028 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1117 23:12:46.553815    5028 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.4" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\coredns\\coredns_v1.8.4" took 191.1726ms
	I1117 23:12:46.553769    5028 image.go:176] found k8s.gcr.io/kube-proxy:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-proxy} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-proxy:v1.22.4-rc.0} opener:0xc0002705b0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:12:46.553869    5028 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.5.0-0 exists
	I1117 23:12:46.553907    5028 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.4-rc.0
	I1117 23:12:46.554028    5028 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.0-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\etcd_3.5.0-0" took 191.4309ms
	I1117 23:12:46.553869    5028 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.4 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns_v1.8.4 succeeded
	I1117 23:12:46.554028    5028 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.0-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.5.0-0 succeeded
	I1117 23:12:46.554138    5028 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0
	I1117 23:12:46.554138    5028 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\docker.io\\kubernetesui\\metrics-scraper_v1.0.7" took 191.0963ms
	I1117 23:12:46.554305    5028 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.7 succeeded
	I1117 23:12:46.555432    5028 cache.go:107] acquiring lock: {Name:mk7f425adc20e24994bc202f1792de2676d16e94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:46.555576    5028 cache.go:107] acquiring lock: {Name:mk27464e4112fb40ec903ad32451be9529e7a06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:12:46.555914    5028 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.5 exists
	I1117 23:12:46.555948    5028 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.4-rc.0
	I1117 23:12:46.556096    5028 cache.go:96] cache image "k8s.gcr.io/pause:3.5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\pause_3.5" took 193.4995ms
	I1117 23:12:46.556096    5028 cache.go:80] save to tar file k8s.gcr.io/pause:3.5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.5 succeeded
	I1117 23:12:46.560927    5028 image.go:176] found k8s.gcr.io/kube-scheduler:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-scheduler} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-scheduler:v1.22.4-rc.0} opener:0xc000270690 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:12:46.560927    5028 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.4-rc.0
	W1117 23:12:46.560927    5028 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.4-rc.0.1553829027.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.4-rc.0.1553829027.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:12:46.560927    5028 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.22.4-rc.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-proxy_v1.22.4-rc.0" took 198.3305ms
	W1117 23:12:46.564939    5028 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.4-rc.0.2105933727.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.4-rc.0.2105933727.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:12:46.565963    5028 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.22.4-rc.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-scheduler_v1.22.4-rc.0" took 203.3664ms
	I1117 23:12:46.567930    5028 image.go:176] found k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-controller-manager} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0} opener:0xc0005162a0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:12:46.567930    5028 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.4-rc.0
	I1117 23:12:46.570937    5028 image.go:176] found k8s.gcr.io/kube-apiserver:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-apiserver} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-apiserver:v1.22.4-rc.0} opener:0xc000b64070 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 23:12:46.570937    5028 cache.go:161] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.4-rc.0
	W1117 23:12:46.573235    5028 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.4-rc.0.3923692041.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.4-rc.0.3923692041.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:12:46.573544    5028 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-controller-manager_v1.22.4-rc.0" took 210.9473ms
	W1117 23:12:46.576930    5028 cache.go:172] failed to clean up the temp file \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.4-rc.0.1064886035.tmp: remove \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.4-rc.0.1064886035.tmp: The process cannot access the file because it is being used by another process.
	I1117 23:12:46.577921    5028 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.22.4-rc.0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-apiserver_v1.22.4-rc.0" took 215.3242ms
	W1117 23:12:46.627340    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:46.627340    5028 fix.go:108] recreateIfNeeded on no-preload-20211117231133-9504: state= err=unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:46.627340    5028 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:12:46.630211    5028 out.go:176] * docker "no-preload-20211117231133-9504" container is missing, will recreate.
	I1117 23:12:46.630211    5028 delete.go:124] DEMOLISHING no-preload-20211117231133-9504 ...
	I1117 23:12:46.638114    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:46.739298    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:46.739399    5028 stop.go:75] unable to get state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:46.739478    5028 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:46.747538    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:46.857869    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:46.857869    5028 delete.go:82] Unable to get host status for no-preload-20211117231133-9504, assuming it has already been deleted: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:46.863968    5028 cli_runner.go:115] Run: docker container inspect -f {{.Id}} no-preload-20211117231133-9504
	W1117 23:12:46.949083    5028 cli_runner.go:162] docker container inspect -f {{.Id}} no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:12:46.949194    5028 kic.go:360] could not find the container no-preload-20211117231133-9504 to remove it. will try anyways
	I1117 23:12:46.952570    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:47.040840    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:12:47.040932    5028 oci.go:83] error getting container status, will try to delete anyways: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:47.045106    5028 cli_runner.go:115] Run: docker exec --privileged -t no-preload-20211117231133-9504 /bin/bash -c "sudo init 0"
	W1117 23:12:47.137331    5028 cli_runner.go:162] docker exec --privileged -t no-preload-20211117231133-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:12:47.137331    5028 oci.go:658] error shutdown no-preload-20211117231133-9504: docker exec --privileged -t no-preload-20211117231133-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:48.142176    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:48.236848    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:48.237101    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:48.237101    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:12:48.237101    5028 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:48.795217    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:48.893662    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:48.893931    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:48.893931    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:12:48.894030    5028 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:49.978944    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:50.070934    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:50.070934    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:50.070934    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:12:50.070934    5028 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:51.386044    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:51.474707    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:51.474998    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:51.475039    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:12:51.475127    5028 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:53.063108    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:53.156166    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:53.156166    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:53.156166    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:12:53.156166    5028 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:55.501444    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:12:55.594371    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:12:55.594371    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:12:55.594371    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:12:55.594371    5028 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:00.105867    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:00.193223    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:00.193513    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:00.193513    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:13:00.193513    5028 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:03.419948    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:03.519920    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:03.519920    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:03.519920    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:13:03.519920    5028 oci.go:87] couldn't shut down no-preload-20211117231133-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	 
	I1117 23:13:03.524138    5028 cli_runner.go:115] Run: docker rm -f -v no-preload-20211117231133-9504
	W1117 23:13:03.615620    5028 cli_runner.go:162] docker rm -f -v no-preload-20211117231133-9504 returned with exit code 1
	W1117 23:13:03.616693    5028 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:13:03.616693    5028 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:13:04.617360    5028 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:13:04.621374    5028 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:13:04.621510    5028 start.go:160] libmachine.API.Create for "no-preload-20211117231133-9504" (driver="docker")
	I1117 23:13:04.621510    5028 client.go:168] LocalClient.Create starting
	I1117 23:13:04.621510    5028 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:13:04.622531    5028 main.go:130] libmachine: Decoding PEM data...
	I1117 23:13:04.622531    5028 main.go:130] libmachine: Parsing certificate...
	I1117 23:13:04.622766    5028 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:13:04.622766    5028 main.go:130] libmachine: Decoding PEM data...
	I1117 23:13:04.622978    5028 main.go:130] libmachine: Parsing certificate...
	I1117 23:13:04.629407    5028 cli_runner.go:115] Run: docker network inspect no-preload-20211117231133-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:13:04.722674    5028 network_create.go:67] Found existing network {name:no-preload-20211117231133-9504 subnet:0xc000bf1b30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1117 23:13:04.722674    5028 kic.go:106] calculated static IP "192.168.58.2" for the "no-preload-20211117231133-9504" container
	I1117 23:13:04.730645    5028 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:13:04.824349    5028 cli_runner.go:115] Run: docker volume create no-preload-20211117231133-9504 --label name.minikube.sigs.k8s.io=no-preload-20211117231133-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:13:04.913181    5028 oci.go:102] Successfully created a docker volume no-preload-20211117231133-9504
	I1117 23:13:04.916684    5028 cli_runner.go:115] Run: docker run --rm --name no-preload-20211117231133-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20211117231133-9504 --entrypoint /usr/bin/test -v no-preload-20211117231133-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:13:05.770788    5028 oci.go:106] Successfully prepared a docker volume no-preload-20211117231133-9504
	I1117 23:13:05.770925    5028 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 23:13:05.775451    5028 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:13:06.131027    5028 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:13:05.857931418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:13:06.131027    5028 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:13:06.131027    5028 client.go:171] LocalClient.Create took 1.5095053s
	I1117 23:13:08.138585    5028 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:13:08.141965    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:08.230855    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:13:08.231243    5028 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:08.386036    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:08.476502    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:13:08.476778    5028 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:08.783540    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:08.876222    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:13:08.876467    5028 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:09.452494    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:09.545568    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	W1117 23:13:09.545957    5028 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	
	W1117 23:13:09.545957    5028 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:09.545957    5028 start.go:129] duration metric: createHost completed in 4.9285595s
	I1117 23:13:09.553342    5028 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:13:09.556949    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:09.645533    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:13:09.645843    5028 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:09.830526    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:09.928906    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:13:09.929131    5028 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:10.264627    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:10.356182    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:13:10.356604    5028 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:10.823598    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:10.941885    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	W1117 23:13:10.942178    5028 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	
	W1117 23:13:10.942272    5028 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:10.942272    5028 fix.go:57] fixHost completed within 24.4411024s
	I1117 23:13:10.942272    5028 start.go:80] releasing machines lock for "no-preload-20211117231133-9504", held for 24.4411024s
	W1117 23:13:10.942558    5028 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:13:10.942558    5028 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:13:10.942558    5028 start.go:547] Will try again in 5 seconds ...
	I1117 23:13:15.944275    5028 start.go:313] acquiring machines lock for no-preload-20211117231133-9504: {Name:mk72290d14abe23f276712b59e3d3211293a2fa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:13:15.944275    5028 start.go:317] acquired machines lock for "no-preload-20211117231133-9504" in 0s
	I1117 23:13:15.944275    5028 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:13:15.944275    5028 fix.go:55] fixHost starting: 
	I1117 23:13:15.955877    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:16.049737    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:16.049905    5028 fix.go:108] recreateIfNeeded on no-preload-20211117231133-9504: state= err=unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:16.049905    5028 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:13:16.053464    5028 out.go:176] * docker "no-preload-20211117231133-9504" container is missing, will recreate.
	I1117 23:13:16.053464    5028 delete.go:124] DEMOLISHING no-preload-20211117231133-9504 ...
	I1117 23:13:16.060435    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:16.159381    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:13:16.159516    5028 stop.go:75] unable to get state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:16.159516    5028 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:16.168766    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:16.264950    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:16.264950    5028 delete.go:82] Unable to get host status for no-preload-20211117231133-9504, assuming it has already been deleted: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:16.269609    5028 cli_runner.go:115] Run: docker container inspect -f {{.Id}} no-preload-20211117231133-9504
	W1117 23:13:16.362587    5028 cli_runner.go:162] docker container inspect -f {{.Id}} no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:13:16.362587    5028 kic.go:360] could not find the container no-preload-20211117231133-9504 to remove it. will try anyways
	I1117 23:13:16.367541    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:16.461236    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:13:16.461236    5028 oci.go:83] error getting container status, will try to delete anyways: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:16.465898    5028 cli_runner.go:115] Run: docker exec --privileged -t no-preload-20211117231133-9504 /bin/bash -c "sudo init 0"
	W1117 23:13:16.556526    5028 cli_runner.go:162] docker exec --privileged -t no-preload-20211117231133-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:13:16.556526    5028 oci.go:658] error shutdown no-preload-20211117231133-9504: docker exec --privileged -t no-preload-20211117231133-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:17.561659    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:17.658276    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:17.658276    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:17.658276    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:13:17.658276    5028 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:18.054566    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:18.154993    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:18.155201    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:18.155201    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:13:18.155280    5028 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:18.755000    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:18.854748    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:18.854822    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:18.854822    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:13:18.854891    5028 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:20.185957    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:20.275037    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:20.275091    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:20.275159    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:13:20.275219    5028 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:21.495685    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:21.587891    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:21.588044    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:21.588044    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:13:21.588149    5028 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:23.374184    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:23.466426    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:23.466800    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:23.466832    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:13:23.466832    5028 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:26.739904    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:26.838628    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:26.838833    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:26.838867    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:13:26.838918    5028 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:32.942761    5028 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:33.036721    5028 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:33.036721    5028 oci.go:670] temporary error verifying shutdown: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:33.036721    5028 oci.go:672] temporary error: container no-preload-20211117231133-9504 status is  but expect it to be exited
	I1117 23:13:33.036721    5028 oci.go:87] couldn't shut down no-preload-20211117231133-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	 
	I1117 23:13:33.041406    5028 cli_runner.go:115] Run: docker rm -f -v no-preload-20211117231133-9504
	W1117 23:13:33.138130    5028 cli_runner.go:162] docker rm -f -v no-preload-20211117231133-9504 returned with exit code 1
	W1117 23:13:33.139295    5028 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:13:33.139295    5028 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:13:34.139430    5028 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:13:34.144005    5028 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:13:34.144329    5028 start.go:160] libmachine.API.Create for "no-preload-20211117231133-9504" (driver="docker")
	I1117 23:13:34.144392    5028 client.go:168] LocalClient.Create starting
	I1117 23:13:34.144810    5028 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:13:34.144810    5028 main.go:130] libmachine: Decoding PEM data...
	I1117 23:13:34.144810    5028 main.go:130] libmachine: Parsing certificate...
	I1117 23:13:34.144810    5028 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:13:34.144810    5028 main.go:130] libmachine: Decoding PEM data...
	I1117 23:13:34.144810    5028 main.go:130] libmachine: Parsing certificate...
	I1117 23:13:34.151492    5028 cli_runner.go:115] Run: docker network inspect no-preload-20211117231133-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:13:34.254406    5028 network_create.go:67] Found existing network {name:no-preload-20211117231133-9504 subnet:0xc0008bcea0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1117 23:13:34.254406    5028 kic.go:106] calculated static IP "192.168.58.2" for the "no-preload-20211117231133-9504" container
	I1117 23:13:34.263191    5028 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:13:34.361441    5028 cli_runner.go:115] Run: docker volume create no-preload-20211117231133-9504 --label name.minikube.sigs.k8s.io=no-preload-20211117231133-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:13:34.454130    5028 oci.go:102] Successfully created a docker volume no-preload-20211117231133-9504
	I1117 23:13:34.457922    5028 cli_runner.go:115] Run: docker run --rm --name no-preload-20211117231133-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20211117231133-9504 --entrypoint /usr/bin/test -v no-preload-20211117231133-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:13:35.572744    5028 cli_runner.go:168] Completed: docker run --rm --name no-preload-20211117231133-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20211117231133-9504 --entrypoint /usr/bin/test -v no-preload-20211117231133-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.1146175s)
	I1117 23:13:35.572744    5028 oci.go:106] Successfully prepared a docker volume no-preload-20211117231133-9504
	I1117 23:13:35.572965    5028 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 23:13:35.578235    5028 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:13:35.936547    5028 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:13:35.665747197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:13:35.937182    5028 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:13:35.937379    5028 client.go:171] LocalClient.Create took 1.7927757s
	I1117 23:13:37.945924    5028 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:13:37.949233    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:38.047989    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:13:38.048090    5028 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:38.253125    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:38.352325    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:13:38.352325    5028 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:38.657460    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:38.765509    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:13:38.765509    5028 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:39.475883    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:39.578119    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	W1117 23:13:39.578242    5028 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	
	W1117 23:13:39.578352    5028 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:39.578415    5028 start.go:129] duration metric: createHost completed in 5.4387153s
	I1117 23:13:39.586750    5028 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:13:39.596110    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:39.691836    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:13:39.691836    5028 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:40.037691    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:40.128205    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:13:40.128205    5028 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:40.582541    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:40.680517    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	I1117 23:13:40.680816    5028 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:41.261328    5028 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504
	W1117 23:13:41.350412    5028 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504 returned with exit code 1
	W1117 23:13:41.350678    5028 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	
	W1117 23:13:41.350678    5028 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117231133-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117231133-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	I1117 23:13:41.350678    5028 fix.go:57] fixHost completed within 25.4062132s
	I1117 23:13:41.350767    5028 start.go:80] releasing machines lock for "no-preload-20211117231133-9504", held for 25.406302s
	W1117 23:13:41.350807    5028 out.go:241] * Failed to start docker container. Running "minikube delete -p no-preload-20211117231133-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p no-preload-20211117231133-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:13:41.356037    5028 out.go:176] 
	W1117 23:13:41.356037    5028 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:13:41.356037    5028 out.go:241] * 
	* 
	W1117 23:13:41.357465    5028 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:13:41.359813    5028 out.go:176] 

** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p no-preload-20211117231133-9504 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.4-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117231133-9504
helpers_test.go:235: (dbg) docker inspect no-preload-20211117231133-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:36Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-20211117231133-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/no-preload-20211117231133-9504/_data",
	        "Name": "no-preload-20211117231133-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504: exit status 7 (1.8452867s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:13:43.457010    4792 status.go:247] status error: host: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117231133-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (59.83s)
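The repeated `ssh_runner` probe in the log above (`df -h /var | awk 'NR==2{print $5}'`) is what each retry loop was trying to run over SSH before the container was found missing. As a minimal standalone sketch of what that pipeline extracts, here it is run against an invented sample `df -h` line (the overlay filesystem values are illustrative, not taken from this run):

```shell
# Sample `df -h` output (illustrative values, not from this test run).
df_output='Filesystem      Size  Used Avail Use% Mounted on
overlay          59G   12G   44G  22% /var'

# minikube runs: sh -c "df -h /var | awk 'NR==2{print $5}'"
# i.e. keep only the Use% column (field 5) of the data line (line 2),
# which start.go then parses as "percentage of /var that is free/used".
printf '%s\n' "$df_output" | awk 'NR==2{print $5}'
# prints: 22%
```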

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (5.42s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504: exit status 7 (1.7701698s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:12:59.077369    7552 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504

** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20211117231152-9504 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20211117231152-9504 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.8115483s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117231152-9504
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117231152-9504:

-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117231152-9504",
	        "Id": "958aad75e6207a47e0822eec615f5b93779f44fc3bfdf4fedaf773fd2df7bb30",
	        "Created": "2021-11-17T23:11:56.13952141Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504: exit status 7 (1.7169774s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:13:02.741602   11080 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117231152-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (5.42s)
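The `status error: exit status 7 (may be ok)` path above comes from `docker container inspect --format={{.State.Status}}` failing because the container no longer exists: the CLI prints `Error: No such container: ...` to stderr and exits non-zero, which minikube's status code reports as `Nonexistent`. A hedged simulation of how that failure surfaces (the function is a stand-in, not the real docker CLI):

```shell
# Stand-in for: docker container inspect <name> --format={{.State.Status}}
# when the container has been deleted: the real CLI writes the error to
# stderr and exits 1; minikube's status.go then reports "Nonexistent".
simulate_inspect() {
  echo 'Error: No such container: default-k8s-different-port-20211117231152-9504' >&2
  return 1
}

# Capture stderr via 2>&1 so the error text is visible in $state.
if ! state=$(simulate_inspect 2>&1); then
  echo "state unknown: $state"
fi
# prints: state unknown: Error: No such container: default-k8s-different-port-20211117231152-9504
```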

TestStartStop/group/default-k8s-different-port/serial/SecondStart (60.06s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20211117231152-9504 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.22.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20211117231152-9504 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.22.3: exit status 80 (58.0107234s)

-- stdout --
	* [default-k8s-different-port-20211117231152-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-different-port-20211117231152-9504 in cluster default-k8s-different-port-20211117231152-9504
	* Pulling base image ...
	* docker "default-k8s-different-port-20211117231152-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-different-port-20211117231152-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:13:02.940971    3192 out.go:297] Setting OutFile to fd 2008 ...
	I1117 23:13:03.002954    3192 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:13:03.002954    3192 out.go:310] Setting ErrFile to fd 1824...
	I1117 23:13:03.002954    3192 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:13:03.015957    3192 out.go:304] Setting JSON to false
	I1117 23:13:03.017943    3192 start.go:112] hostinfo: {"hostname":"minikube2","uptime":80098,"bootTime":1637110685,"procs":131,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:13:03.017943    3192 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:13:03.020945    3192 out.go:176] * [default-k8s-different-port-20211117231152-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:13:03.021949    3192 notify.go:174] Checking for updates...
	I1117 23:13:03.023980    3192 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:13:03.026945    3192 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:13:03.028943    3192 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:13:03.029944    3192 config.go:176] Loaded profile config "default-k8s-different-port-20211117231152-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:13:03.029944    3192 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:13:04.639703    3192 docker.go:132] docker version: linux-19.03.12
	I1117 23:13:04.643792    3192 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:13:05.009218    3192 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:13:04.719498346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:13:05.013222    3192 out.go:176] * Using the docker driver based on existing profile
	I1117 23:13:05.013222    3192 start.go:280] selected driver: docker
	I1117 23:13:05.013222    3192 start.go:775] validating driver "docker" against &{Name:default-k8s-different-port-20211117231152-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:default-k8s-different-port-20211117231152-9504 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:13:05.013222    3192 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:13:05.072779    3192 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:13:05.423503    3192 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2021-11-17 23:13:05.155514129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:13:05.423503    3192 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 23:13:05.423503    3192 cni.go:93] Creating CNI manager for ""
	I1117 23:13:05.423503    3192 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:13:05.423503    3192 start_flags.go:282] config:
	{Name:default-k8s-different-port-20211117231152-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:default-k8s-different-port-20211117231152-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:13:05.427716    3192 out.go:176] * Starting control plane node default-k8s-different-port-20211117231152-9504 in cluster default-k8s-different-port-20211117231152-9504
	I1117 23:13:05.427716    3192 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:13:05.430139    3192 out.go:176] * Pulling base image ...
	I1117 23:13:05.430297    3192 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:13:05.430297    3192 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:13:05.430459    3192 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 23:13:05.430576    3192 cache.go:57] Caching tarball of preloaded images
	I1117 23:13:05.431111    3192 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:13:05.431443    3192 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 23:13:05.431745    3192 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-different-port-20211117231152-9504\config.json ...
	I1117 23:13:05.525102    3192 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:13:05.525102    3192 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:13:05.525102    3192 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:13:05.525102    3192 start.go:313] acquiring machines lock for default-k8s-different-port-20211117231152-9504: {Name:mk2897e2360a69311577988e13dc34760667171e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:13:05.525102    3192 start.go:317] acquired machines lock for "default-k8s-different-port-20211117231152-9504" in 0s
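	The machines-lock acquisition above is configured with Delay:500ms and Timeout:10m0s, i.e. poll every half second until the named lock frees up or ten minutes elapse (here it was free, so it was acquired "in 0s"). A minimal, hypothetical sketch of that poll-until-timeout pattern (not minikube's actual mutex package) could look like:

```python
import time

class LockTimeout(Exception):
    pass

# Hypothetical stand-in for a named-mutex acquire with
# Delay/Timeout semantics like the log line above.
def acquire(held: set, name: str, delay: float = 0.5, timeout: float = 600.0) -> None:
    deadline = time.monotonic() + timeout
    while name in held:                      # lock busy: poll
        if time.monotonic() >= deadline:
            raise LockTimeout(f"timed out acquiring lock {name!r}")
        time.sleep(delay)
    held.add(name)                           # lock free: take it

held = set()
acquire(held, "default-k8s-different-port")  # free, so returns immediately
```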
	I1117 23:13:05.525102    3192 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:13:05.525102    3192 fix.go:55] fixHost starting: 
	I1117 23:13:05.534853    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:05.625908    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:05.626131    3192 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211117231152-9504: state= err=unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:05.626131    3192 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:13:05.628022    3192 out.go:176] * docker "default-k8s-different-port-20211117231152-9504" container is missing, will recreate.
	I1117 23:13:05.628022    3192 delete.go:124] DEMOLISHING default-k8s-different-port-20211117231152-9504 ...
	I1117 23:13:05.635017    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:05.728284    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:13:05.728516    3192 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:05.728516    3192 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:05.737606    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:05.842214    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:05.842367    3192 delete.go:82] Unable to get host status for default-k8s-different-port-20211117231152-9504, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:05.846349    3192 cli_runner.go:115] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20211117231152-9504
	W1117 23:13:05.938050    3192 cli_runner.go:162] docker container inspect -f {{.Id}} default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:13:05.938452    3192 kic.go:360] could not find the container default-k8s-different-port-20211117231152-9504 to remove it. will try anyways
	I1117 23:13:05.944888    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:06.034216    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:13:06.034457    3192 oci.go:83] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:06.038519    3192 cli_runner.go:115] Run: docker exec --privileged -t default-k8s-different-port-20211117231152-9504 /bin/bash -c "sudo init 0"
	W1117 23:13:06.128980    3192 cli_runner.go:162] docker exec --privileged -t default-k8s-different-port-20211117231152-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:13:06.129074    3192 oci.go:658] error shutdown default-k8s-different-port-20211117231152-9504: docker exec --privileged -t default-k8s-different-port-20211117231152-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:07.134322    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:07.228022    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:07.228298    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:07.228298    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:07.228298    3192 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:07.785344    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:07.874403    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:07.874509    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:07.874509    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:07.874576    3192 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:08.960347    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:09.052003    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:09.052003    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:09.052003    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:09.052003    3192 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:10.369064    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:10.465141    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:10.465252    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:10.465252    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:10.465252    3192 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:12.053195    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:12.151165    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:12.151572    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:12.151572    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:12.151572    3192 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:14.497580    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:14.590491    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:14.590714    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:14.590714    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:14.590714    3192 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:19.102522    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:19.195503    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:19.195503    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:19.195503    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:19.195503    3192 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:22.423257    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:22.515236    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:22.515236    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:22.515236    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:22.515236    3192 oci.go:87] couldn't shut down default-k8s-different-port-20211117231152-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	 
	I1117 23:13:22.520487    3192 cli_runner.go:115] Run: docker rm -f -v default-k8s-different-port-20211117231152-9504
	W1117 23:13:22.621265    3192 cli_runner.go:162] docker rm -f -v default-k8s-different-port-20211117231152-9504 returned with exit code 1
	W1117 23:13:22.622461    3192 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:13:22.622527    3192 fix.go:120] Sleeping 1 second for extra luck!
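	The retry.go entries above show roughly geometric, jittered delays between inspect attempts (552ms, 1.08s, 1.31s, 1.58s, 2.34s, 4.51s, ...), after which the failure is surrendered as "probably ok". A minimal sketch of that jittered-backoff retry pattern (hypothetical; minikube's retry package differs in detail):

```python
import random
import time

# Hypothetical jittered backoff, echoing the growing
# "will retry after ..." delays in the log above.
def retry(fn, attempts: int = 8, base: float = 0.5, factor: float = 1.5):
    err = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as e:
            err = e
            # grow the delay geometrically, with +/-25% jitter
            time.sleep(base * (factor ** i) * random.uniform(0.75, 1.25))
    raise err  # all attempts exhausted

calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("No such container")
    return "exited"

print(retry(flaky, base=0.001))  # succeeds on the third attempt
```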
	I1117 23:13:23.622716    3192 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:13:23.628183    3192 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:13:23.628511    3192 start.go:160] libmachine.API.Create for "default-k8s-different-port-20211117231152-9504" (driver="docker")
	I1117 23:13:23.628511    3192 client.go:168] LocalClient.Create starting
	I1117 23:13:23.629190    3192 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:13:23.629295    3192 main.go:130] libmachine: Decoding PEM data...
	I1117 23:13:23.629295    3192 main.go:130] libmachine: Parsing certificate...
	I1117 23:13:23.629295    3192 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:13:23.629843    3192 main.go:130] libmachine: Decoding PEM data...
	I1117 23:13:23.629843    3192 main.go:130] libmachine: Parsing certificate...
	I1117 23:13:23.635354    3192 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117231152-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:13:23.731525    3192 network_create.go:67] Found existing network {name:default-k8s-different-port-20211117231152-9504 subnet:0xc001159410 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 23:13:23.731525    3192 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20211117231152-9504" container
	I1117 23:13:23.741813    3192 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:13:23.847902    3192 cli_runner.go:115] Run: docker volume create default-k8s-different-port-20211117231152-9504 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117231152-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:13:23.951913    3192 oci.go:102] Successfully created a docker volume default-k8s-different-port-20211117231152-9504
	I1117 23:13:23.956620    3192 cli_runner.go:115] Run: docker run --rm --name default-k8s-different-port-20211117231152-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117231152-9504 --entrypoint /usr/bin/test -v default-k8s-different-port-20211117231152-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:13:24.854564    3192 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20211117231152-9504
	I1117 23:13:24.854746    3192 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:13:24.854857    3192 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:13:24.859512    3192 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:13:24.859512    3192 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117231152-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:13:25.003068    3192 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117231152-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:13:25.003068    3192 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117231152-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:13:25.259016    3192 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:13:24.956943116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:13:25.259539    3192 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:13:25.259644    3192 client.go:171] LocalClient.Create took 1.6311204s
	I1117 23:13:27.268586    3192 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:13:27.273332    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:13:27.372836    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:13:27.373123    3192 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:27.530125    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:13:27.637769    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:13:27.638070    3192 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:27.943642    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:13:28.055873    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:13:28.056253    3192 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:28.637166    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:13:28.729486    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	W1117 23:13:28.729486    3192 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	
	W1117 23:13:28.729486    3192 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:28.729486    3192 start.go:129] duration metric: createHost completed in 5.1067315s
	I1117 23:13:28.737099    3192 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:13:28.740784    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:13:28.831589    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:13:28.831589    3192 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:29.015779    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:13:29.115486    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:13:29.115848    3192 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:29.451186    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:13:29.542647    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:13:29.543171    3192 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:30.008682    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:13:30.110984    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	W1117 23:13:30.111383    3192 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	
	W1117 23:13:30.111447    3192 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:30.111503    3192 fix.go:57] fixHost completed within 24.5861602s
	I1117 23:13:30.111503    3192 start.go:80] releasing machines lock for "default-k8s-different-port-20211117231152-9504", held for 24.5862166s
	W1117 23:13:30.111619    3192 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:13:30.111722    3192 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:13:30.111722    3192 start.go:547] Will try again in 5 seconds ...
	I1117 23:13:35.112614    3192 start.go:313] acquiring machines lock for default-k8s-different-port-20211117231152-9504: {Name:mk2897e2360a69311577988e13dc34760667171e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:13:35.112920    3192 start.go:317] acquired machines lock for "default-k8s-different-port-20211117231152-9504" in 148.5µs
	I1117 23:13:35.112920    3192 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:13:35.112920    3192 fix.go:55] fixHost starting: 
	I1117 23:13:35.120441    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:35.209632    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:35.209632    3192 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211117231152-9504: state= err=unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:35.209632    3192 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:13:35.213429    3192 out.go:176] * docker "default-k8s-different-port-20211117231152-9504" container is missing, will recreate.
	I1117 23:13:35.213534    3192 delete.go:124] DEMOLISHING default-k8s-different-port-20211117231152-9504 ...
	I1117 23:13:35.221248    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:35.314345    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:13:35.314645    3192 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:35.314645    3192 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:35.323857    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:35.417144    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:35.417351    3192 delete.go:82] Unable to get host status for default-k8s-different-port-20211117231152-9504, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:35.421826    3192 cli_runner.go:115] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20211117231152-9504
	W1117 23:13:35.509614    3192 cli_runner.go:162] docker container inspect -f {{.Id}} default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:13:35.509614    3192 kic.go:360] could not find the container default-k8s-different-port-20211117231152-9504 to remove it. will try anyways
	I1117 23:13:35.515752    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:35.611880    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:13:35.612517    3192 oci.go:83] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:35.616857    3192 cli_runner.go:115] Run: docker exec --privileged -t default-k8s-different-port-20211117231152-9504 /bin/bash -c "sudo init 0"
	W1117 23:13:35.706644    3192 cli_runner.go:162] docker exec --privileged -t default-k8s-different-port-20211117231152-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:13:35.706644    3192 oci.go:658] error shutdown default-k8s-different-port-20211117231152-9504: docker exec --privileged -t default-k8s-different-port-20211117231152-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:36.711400    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:36.804562    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:36.804562    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:36.804562    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:36.804939    3192 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:37.201500    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:37.289008    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:37.289094    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:37.289175    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:37.289175    3192 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:37.889355    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:37.987458    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:37.987691    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:37.987691    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:37.987691    3192 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:39.318920    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:39.411953    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:39.411953    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:39.411953    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:39.411953    3192 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:40.629792    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:40.713892    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:40.714156    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:40.714156    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:40.714156    3192 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:42.499903    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:42.598195    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:42.598472    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:42.598472    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:42.598632    3192 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:45.872084    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:45.961267    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:45.961421    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:45.961421    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:45.961421    3192 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:52.066820    3192 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:13:52.155320    3192 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:52.155320    3192 oci.go:670] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:52.155543    3192 oci.go:672] temporary error: container default-k8s-different-port-20211117231152-9504 status is  but expect it to be exited
	I1117 23:13:52.155543    3192 oci.go:87] couldn't shut down default-k8s-different-port-20211117231152-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	 
	I1117 23:13:52.159520    3192 cli_runner.go:115] Run: docker rm -f -v default-k8s-different-port-20211117231152-9504
	W1117 23:13:52.252587    3192 cli_runner.go:162] docker rm -f -v default-k8s-different-port-20211117231152-9504 returned with exit code 1
	W1117 23:13:52.253594    3192 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:13:52.253594    3192 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:13:53.253724    3192 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:13:53.257845    3192 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:13:53.258030    3192 start.go:160] libmachine.API.Create for "default-k8s-different-port-20211117231152-9504" (driver="docker")
	I1117 23:13:53.258030    3192 client.go:168] LocalClient.Create starting
	I1117 23:13:53.258030    3192 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:13:53.258764    3192 main.go:130] libmachine: Decoding PEM data...
	I1117 23:13:53.258801    3192 main.go:130] libmachine: Parsing certificate...
	I1117 23:13:53.258926    3192 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:13:53.258926    3192 main.go:130] libmachine: Decoding PEM data...
	I1117 23:13:53.258926    3192 main.go:130] libmachine: Parsing certificate...
	I1117 23:13:53.264502    3192 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117231152-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:13:53.359525    3192 cli_runner.go:162] docker network inspect default-k8s-different-port-20211117231152-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:13:53.363751    3192 network_create.go:254] running [docker network inspect default-k8s-different-port-20211117231152-9504] to gather additional debugging logs...
	I1117 23:13:53.363781    3192 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117231152-9504
	W1117 23:13:53.465905    3192 cli_runner.go:162] docker network inspect default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:13:53.465905    3192 network_create.go:257] error running [docker network inspect default-k8s-different-port-20211117231152-9504]: docker network inspect default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20211117231152-9504
	I1117 23:13:53.465905    3192 network_create.go:259] output of [docker network inspect default-k8s-different-port-20211117231152-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20211117231152-9504
	
	** /stderr **
	I1117 23:13:53.470433    3192 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:13:53.575668    3192 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000118150] misses:0}
	I1117 23:13:53.575668    3192 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:13:53.575668    3192 network_create.go:106] attempt to create docker network default-k8s-different-port-20211117231152-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:13:53.579664    3192 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211117231152-9504
	W1117 23:13:53.669873    3192 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211117231152-9504 returned with exit code 1
	W1117 23:13:53.669873    3192 network_create.go:98] failed to create docker network default-k8s-different-port-20211117231152-9504 192.168.49.0/24, will retry: subnet is taken
	I1117 23:13:53.683663    3192 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000118150] amended:false}} dirty:map[] misses:0}
	I1117 23:13:53.684323    3192 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:13:53.696940    3192 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000118150] amended:true}} dirty:map[192.168.49.0:0xc000118150 192.168.58.0:0xc000ac61d8] misses:0}
	I1117 23:13:53.696940    3192 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:13:53.696940    3192 network_create.go:106] attempt to create docker network default-k8s-different-port-20211117231152-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:13:53.700376    3192 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211117231152-9504
	I1117 23:13:53.907700    3192 network_create.go:90] docker network default-k8s-different-port-20211117231152-9504 192.168.58.0/24 created
	I1117 23:13:53.907896    3192 kic.go:106] calculated static IP "192.168.58.2" for the "default-k8s-different-port-20211117231152-9504" container
	I1117 23:13:53.915617    3192 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:13:54.017378    3192 cli_runner.go:115] Run: docker volume create default-k8s-different-port-20211117231152-9504 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117231152-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:13:54.121417    3192 oci.go:102] Successfully created a docker volume default-k8s-different-port-20211117231152-9504
	I1117 23:13:54.125526    3192 cli_runner.go:115] Run: docker run --rm --name default-k8s-different-port-20211117231152-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117231152-9504 --entrypoint /usr/bin/test -v default-k8s-different-port-20211117231152-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:13:55.002052    3192 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20211117231152-9504
	I1117 23:13:55.002327    3192 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 23:13:55.002327    3192 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:13:55.006827    3192 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:13:55.008482    3192 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117231152-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:13:55.124348    3192 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117231152-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:13:55.124348    3192 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117231152-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location whe
re exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.Exceptio
nDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.Excepti
onServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:13:55.364662    3192 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 23:13:55.101708331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:13:55.364662    3192 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:13:55.364662    3192 client.go:171] LocalClient.Create took 2.1066156s
	I1117 23:13:57.372300    3192 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:13:57.376020    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:13:57.465189    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:13:57.465438    3192 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:57.670442    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:13:57.760509    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:13:57.760855    3192 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:58.063865    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:13:58.156340    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:13:58.156616    3192 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:58.865794    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:13:58.961784    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	W1117 23:13:58.961784    3192 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	
	W1117 23:13:58.961784    3192 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:58.961784    3192 start.go:129] duration metric: createHost completed in 5.707795s
	I1117 23:13:58.970503    3192 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:13:58.974918    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:13:59.080792    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:13:59.080792    3192 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:59.428316    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:13:59.516288    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:13:59.516288    3192 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:13:59.970242    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:14:00.065281    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	I1117 23:14:00.065458    3192 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:14:00.646437    3192 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504
	W1117 23:14:00.741424    3192 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504 returned with exit code 1
	W1117 23:14:00.741759    3192 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	
	W1117 23:14:00.741759    3192 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117231152-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117231152-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	I1117 23:14:00.741836    3192 fix.go:57] fixHost completed within 25.6287232s
	I1117 23:14:00.741836    3192 start.go:80] releasing machines lock for "default-k8s-different-port-20211117231152-9504", held for 25.6287232s
	W1117 23:14:00.742278    3192 out.go:241] * Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20211117231152-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20211117231152-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:14:00.747458    3192 out.go:176] 
	W1117 23:14:00.747458    3192 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:14:00.747458    3192 out.go:241] * 
	* 
	W1117 23:14:00.749055    3192 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:14:00.751959    3192 out.go:176] 

** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p default-k8s-different-port-20211117231152-9504 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.22.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117231152-9504
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117231152-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:56Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-different-port-20211117231152-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/default-k8s-different-port-20211117231152-9504/_data",
	        "Name": "default-k8s-different-port-20211117231152-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504: exit status 7 (1.8382774s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:14:02.799927    6468 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117231152-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (60.06s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (1.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20211117231110-9504" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117231110-9504:

-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117231110-9504",
	        "Id": "138f7d40ee5b452e80a6cb6a3edb4c912f1cda6b29239fd829c80dbf72edf47c",
	        "Created": "2021-11-17T23:12:43.316454722Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504: exit status 7 (1.8459179s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:13:23.877884    4352 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (1.95s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (1.95s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20211117231110-9504" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117231110-9504:

-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117231110-9504",
	        "Id": "49a30a26e6046d6b1c8274a30b93a0b079cde1a03a9b3864d276f6a2cf99256e",
	        "Created": "2021-11-17T23:12:44.093259743Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504: exit status 7 (1.8477428s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:13:24.564395    8240 status.go:247] status error: host: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (1.95s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (2.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20211117231110-9504" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20211117231110-9504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211117231110-9504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (242.0997ms)

** stderr ** 
	error: context "old-k8s-version-20211117231110-9504" does not exist

** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20211117231110-9504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117231110-9504:

-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117231110-9504",
	        "Id": "138f7d40ee5b452e80a6cb6a3edb4c912f1cda6b29239fd829c80dbf72edf47c",
	        "Created": "2021-11-17T23:12:43.316454722Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504: exit status 7 (1.8507256s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:13:26.085535    8260 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (2.21s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (2.19s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20211117231110-9504" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20211117231110-9504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20211117231110-9504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (212.8185ms)

** stderr ** 
	error: context "embed-certs-20211117231110-9504" does not exist

** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-20211117231110-9504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117231110-9504:

-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117231110-9504",
	        "Id": "49a30a26e6046d6b1c8274a30b93a0b079cde1a03a9b3864d276f6a2cf99256e",
	        "Created": "2021-11-17T23:12:44.093259743Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504: exit status 7 (1.8499874s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:13:26.759128    7412 status.go:247] status error: host: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (2.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (3.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-20211117231110-9504 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p old-k8s-version-20211117231110-9504 "sudo crictl images -o json": exit status 80 (1.8915752s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p old-k8s-version-20211117231110-9504 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:289: failed to decode images json: unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:289: v1.14.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns:1.3.1",
- 	"k8s.gcr.io/etcd:3.3.10",
- 	"k8s.gcr.io/kube-apiserver:v1.14.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.14.0",
- 	"k8s.gcr.io/kube-proxy:v1.14.0",
- 	"k8s.gcr.io/kube-scheduler:v1.14.0",
- 	"k8s.gcr.io/pause:3.1",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117231110-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117231110-9504",
	        "Id": "138f7d40ee5b452e80a6cb6a3edb4c912f1cda6b29239fd829c80dbf72edf47c",
	        "Created": "2021-11-17T23:12:43.316454722Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504: exit status 7 (1.839344s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:13:29.921842    8400 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (3.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (3.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-20211117231110-9504 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p embed-certs-20211117231110-9504 "sudo crictl images -o json": exit status 80 (1.8565829s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p embed-certs-20211117231110-9504 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:289: failed to decode images json: unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:289: v1.22.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.3",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.3",
- 	"k8s.gcr.io/kube-proxy:v1.22.3",
- 	"k8s.gcr.io/kube-scheduler:v1.22.3",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117231110-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117231110-9504",
	        "Id": "49a30a26e6046d6b1c8274a30b93a0b079cde1a03a9b3864d276f6a2cf99256e",
	        "Created": "2021-11-17T23:12:44.093259743Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504: exit status 7 (1.8109982s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:13:30.548983    4376 status.go:247] status error: host: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (3.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (5.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-20211117231110-9504 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p old-k8s-version-20211117231110-9504 --alsologtostderr -v=1: exit status 80 (1.876926s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 23:13:30.157669    6320 out.go:297] Setting OutFile to fd 1940 ...
	I1117 23:13:30.240500    6320 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:13:30.240604    6320 out.go:310] Setting ErrFile to fd 1748...
	I1117 23:13:30.240604    6320 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:13:30.253570    6320 out.go:304] Setting JSON to false
	I1117 23:13:30.253570    6320 mustload.go:65] Loading cluster: old-k8s-version-20211117231110-9504
	I1117 23:13:30.253570    6320 config.go:176] Loaded profile config "old-k8s-version-20211117231110-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I1117 23:13:30.265281    6320 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}
	W1117 23:13:31.784074    6320 cli_runner.go:162] docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:31.784148    6320 cli_runner.go:168] Completed: docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: (1.51864s)
	I1117 23:13:31.787889    6320 out.go:176] 
	W1117 23:13:31.788072    6320 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504
	
	W1117 23:13:31.788194    6320 out.go:241] * 
	* 
	W1117 23:13:31.797291    6320 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:13:31.799564    6320 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:296: out/minikube-windows-amd64.exe pause -p old-k8s-version-20211117231110-9504 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117231110-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117231110-9504",
	        "Id": "138f7d40ee5b452e80a6cb6a3edb4c912f1cda6b29239fd829c80dbf72edf47c",
	        "Created": "2021-11-17T23:12:43.316454722Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504: exit status 7 (1.8380416s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:13:33.747407    6668 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117231110-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117231110-9504",
	        "Id": "138f7d40ee5b452e80a6cb6a3edb4c912f1cda6b29239fd829c80dbf72edf47c",
	        "Created": "2021-11-17T23:12:43.316454722Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20211117231110-9504 -n old-k8s-version-20211117231110-9504: exit status 7 (1.8626374s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:13:35.726465    8996 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117231110-9504": docker container inspect old-k8s-version-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117231110-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.80s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-20211117231110-9504 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p embed-certs-20211117231110-9504 --alsologtostderr -v=1: exit status 80 (1.8741942s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 23:13:30.750753   12084 out.go:297] Setting OutFile to fd 1544 ...
	I1117 23:13:30.842533   12084 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:13:30.842533   12084 out.go:310] Setting ErrFile to fd 1528...
	I1117 23:13:30.842533   12084 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:13:30.852903   12084 out.go:304] Setting JSON to false
	I1117 23:13:30.853435   12084 mustload.go:65] Loading cluster: embed-certs-20211117231110-9504
	I1117 23:13:30.853682   12084 config.go:176] Loaded profile config "embed-certs-20211117231110-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:13:30.862597   12084 cli_runner.go:115] Run: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}
	W1117 23:13:32.395959   12084 cli_runner.go:162] docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:32.395959   12084 cli_runner.go:168] Completed: docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: (1.5333502s)
	I1117 23:13:32.402582   12084 out.go:176] 
	W1117 23:13:32.403181   12084 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504
	
	W1117 23:13:32.403181   12084 out.go:241] * 
	* 
	W1117 23:13:32.411465   12084 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:13:32.413946   12084 out.go:176] 

** /stderr **
start_stop_delete_test.go:296: out/minikube-windows-amd64.exe pause -p embed-certs-20211117231110-9504 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117231110-9504:

-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117231110-9504",
	        "Id": "49a30a26e6046d6b1c8274a30b93a0b079cde1a03a9b3864d276f6a2cf99256e",
	        "Created": "2021-11-17T23:12:44.093259743Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504: exit status 7 (1.8552229s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:13:34.390135    8664 status.go:247] status error: host: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117231110-9504
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117231110-9504:

-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117231110-9504",
	        "Id": "49a30a26e6046d6b1c8274a30b93a0b079cde1a03a9b3864d276f6a2cf99256e",
	        "Created": "2021-11-17T23:12:44.093259743Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20211117231110-9504 -n embed-certs-20211117231110-9504: exit status 7 (1.8440746s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:13:36.347578    5908 status.go:247] status error: host: state: unknown state "embed-certs-20211117231110-9504": docker container inspect embed-certs-20211117231110-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117231110-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117231110-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.80s)

                                                
TestStartStop/group/newest-cni/serial/FirstStart (39.9s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20211117231341-9504 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.4-rc.0

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-20211117231341-9504 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.4-rc.0: exit status 80 (37.886625s)

-- stdout --
	* [newest-cni-20211117231341-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on user configuration
	* Starting control plane node newest-cni-20211117231341-9504 in cluster newest-cni-20211117231341-9504
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-20211117231341-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:13:42.102042    9004 out.go:297] Setting OutFile to fd 1364 ...
	I1117 23:13:42.175033    9004 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:13:42.175033    9004 out.go:310] Setting ErrFile to fd 1512...
	I1117 23:13:42.175033    9004 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:13:42.189476    9004 out.go:304] Setting JSON to false
	I1117 23:13:42.191465    9004 start.go:112] hostinfo: {"hostname":"minikube2","uptime":80137,"bootTime":1637110685,"procs":131,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:13:42.191465    9004 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:13:42.199094    9004 out.go:176] * [newest-cni-20211117231341-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:13:42.199663    9004 notify.go:174] Checking for updates...
	I1117 23:13:42.202280    9004 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:13:42.204809    9004 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:13:42.207603    9004 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:13:42.212334    9004 config.go:176] Loaded profile config "default-k8s-different-port-20211117231152-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:13:42.212334    9004 config.go:176] Loaded profile config "multinode-20211117225530-9504-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:13:42.213407    9004 config.go:176] Loaded profile config "no-preload-20211117231133-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 23:13:42.213550    9004 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:13:43.788670    9004 docker.go:132] docker version: linux-19.03.12
	I1117 23:13:43.793159    9004 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:13:44.161964    9004 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:13:43.889726076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:13:44.169217    9004 out.go:176] * Using the docker driver based on user configuration
	I1117 23:13:44.169217    9004 start.go:280] selected driver: docker
	I1117 23:13:44.169217    9004 start.go:775] validating driver "docker" against <nil>
	I1117 23:13:44.169217    9004 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:13:44.227326    9004 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:13:44.582927    9004 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:13:44.313131545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:13:44.583122    9004 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	W1117 23:13:44.583122    9004 out.go:241] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1117 23:13:44.584101    9004 start_flags.go:777] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1117 23:13:44.584101    9004 cni.go:93] Creating CNI manager for ""
	I1117 23:13:44.584101    9004 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:13:44.584101    9004 start_flags.go:282] config:
	{Name:newest-cni-20211117231341-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:newest-cni-20211117231341-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:13:44.591452    9004 out.go:176] * Starting control plane node newest-cni-20211117231341-9504 in cluster newest-cni-20211117231341-9504
	I1117 23:13:44.591452    9004 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:13:44.598715    9004 out.go:176] * Pulling base image ...
	I1117 23:13:44.598715    9004 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 23:13:44.598969    9004 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:13:44.599060    9004 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4
	I1117 23:13:44.599115    9004 cache.go:57] Caching tarball of preloaded images
	I1117 23:13:44.599371    9004 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:13:44.599649    9004 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.4-rc.0 on docker
	I1117 23:13:44.599940    9004 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20211117231341-9504\config.json ...
	I1117 23:13:44.600214    9004 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20211117231341-9504\config.json: {Name:mk9fcc1b90059c78c8a8b0ab767654d6edfb8638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 23:13:44.696903    9004 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:13:44.696903    9004 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:13:44.696903    9004 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:13:44.697191    9004 start.go:313] acquiring machines lock for newest-cni-20211117231341-9504: {Name:mkb4f6b61af8e77a295e231eb5be8b44810e2cdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:13:44.697472    9004 start.go:317] acquired machines lock for "newest-cni-20211117231341-9504" in 182.1µs
	I1117 23:13:44.697714    9004 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20211117231341-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:newest-cni-20211117231341-9504 Namespace:default APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 Control
Plane:true Worker:true}
	I1117 23:13:44.697829    9004 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:13:44.703381    9004 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:13:44.704117    9004 start.go:160] libmachine.API.Create for "newest-cni-20211117231341-9504" (driver="docker")
	I1117 23:13:44.704117    9004 client.go:168] LocalClient.Create starting
	I1117 23:13:44.704117    9004 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:13:44.704648    9004 main.go:130] libmachine: Decoding PEM data...
	I1117 23:13:44.704648    9004 main.go:130] libmachine: Parsing certificate...
	I1117 23:13:44.704957    9004 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:13:44.704957    9004 main.go:130] libmachine: Decoding PEM data...
	I1117 23:13:44.704957    9004 main.go:130] libmachine: Parsing certificate...
	I1117 23:13:44.712606    9004 cli_runner.go:115] Run: docker network inspect newest-cni-20211117231341-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:13:44.806812    9004 cli_runner.go:162] docker network inspect newest-cni-20211117231341-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:13:44.811148    9004 network_create.go:254] running [docker network inspect newest-cni-20211117231341-9504] to gather additional debugging logs...
	I1117 23:13:44.811282    9004 cli_runner.go:115] Run: docker network inspect newest-cni-20211117231341-9504
	W1117 23:13:44.902453    9004 cli_runner.go:162] docker network inspect newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:13:44.902766    9004 network_create.go:257] error running [docker network inspect newest-cni-20211117231341-9504]: docker network inspect newest-cni-20211117231341-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20211117231341-9504
	I1117 23:13:44.902766    9004 network_create.go:259] output of [docker network inspect newest-cni-20211117231341-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20211117231341-9504
	
	** /stderr **
	I1117 23:13:44.909911    9004 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:13:45.020213    9004 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006c6148] misses:0}
	I1117 23:13:45.020755    9004 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:13:45.020755    9004 network_create.go:106] attempt to create docker network newest-cni-20211117231341-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:13:45.024895    9004 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117231341-9504
	I1117 23:13:45.228222    9004 network_create.go:90] docker network newest-cni-20211117231341-9504 192.168.49.0/24 created
	I1117 23:13:45.228444    9004 kic.go:106] calculated static IP "192.168.49.2" for the "newest-cni-20211117231341-9504" container
	I1117 23:13:45.239965    9004 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:13:45.335879    9004 cli_runner.go:115] Run: docker volume create newest-cni-20211117231341-9504 --label name.minikube.sigs.k8s.io=newest-cni-20211117231341-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:13:45.437060    9004 oci.go:102] Successfully created a docker volume newest-cni-20211117231341-9504
	I1117 23:13:45.441932    9004 cli_runner.go:115] Run: docker run --rm --name newest-cni-20211117231341-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20211117231341-9504 --entrypoint /usr/bin/test -v newest-cni-20211117231341-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:13:46.554504    9004 cli_runner.go:168] Completed: docker run --rm --name newest-cni-20211117231341-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20211117231341-9504 --entrypoint /usr/bin/test -v newest-cni-20211117231341-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.1125634s)
	I1117 23:13:46.554841    9004 oci.go:106] Successfully prepared a docker volume newest-cni-20211117231341-9504
	I1117 23:13:46.554913    9004 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 23:13:46.555022    9004 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:13:46.559535    9004 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:13:46.560144    9004 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117231341-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:13:46.667844    9004 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117231341-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:13:46.667844    9004 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117231341-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:13:46.929681    9004 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 23:13:46.649442341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:13:46.930179    9004 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:13:46.930179    9004 client.go:171] LocalClient.Create took 2.2260454s
	I1117 23:13:48.938882    9004 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:13:48.941613    9004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:13:49.031575    9004 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:13:49.031916    9004 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:49.313463    9004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:13:49.411670    9004 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:13:49.411670    9004 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:49.958015    9004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:13:50.042876    9004 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:13:50.043161    9004 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:50.703913    9004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:13:50.796607    9004 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	W1117 23:13:50.796753    9004 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	
	W1117 23:13:50.796753    9004 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:50.796753    9004 start.go:129] duration metric: createHost completed in 6.0987797s
	I1117 23:13:50.796753    9004 start.go:80] releasing machines lock for "newest-cni-20211117231341-9504", held for 6.0992358s
	W1117 23:13:50.796753    9004 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:13:50.806267    9004 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:13:50.894590    9004 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:50.894899    9004 delete.go:82] Unable to get host status for newest-cni-20211117231341-9504, assuming it has already been deleted: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	W1117 23:13:50.894972    9004 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:13:50.894972    9004 start.go:547] Will try again in 5 seconds ...
	I1117 23:13:55.895716    9004 start.go:313] acquiring machines lock for newest-cni-20211117231341-9504: {Name:mkb4f6b61af8e77a295e231eb5be8b44810e2cdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:13:55.895716    9004 start.go:317] acquired machines lock for "newest-cni-20211117231341-9504" in 0s
	I1117 23:13:55.895716    9004 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:13:55.895716    9004 fix.go:55] fixHost starting: 
	I1117 23:13:55.904092    9004 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:13:56.001271    9004 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:56.001459    9004 fix.go:108] recreateIfNeeded on newest-cni-20211117231341-9504: state= err=unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:56.001459    9004 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:13:56.005181    9004 out.go:176] * docker "newest-cni-20211117231341-9504" container is missing, will recreate.
	I1117 23:13:56.005181    9004 delete.go:124] DEMOLISHING newest-cni-20211117231341-9504 ...
	I1117 23:13:56.011929    9004 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:13:56.099523    9004 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:13:56.099786    9004 stop.go:75] unable to get state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:56.099786    9004 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:56.107619    9004 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:13:56.197208    9004 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:56.197333    9004 delete.go:82] Unable to get host status for newest-cni-20211117231341-9504, assuming it has already been deleted: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:56.201754    9004 cli_runner.go:115] Run: docker container inspect -f {{.Id}} newest-cni-20211117231341-9504
	W1117 23:13:56.289748    9004 cli_runner.go:162] docker container inspect -f {{.Id}} newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:13:56.289926    9004 kic.go:360] could not find the container newest-cni-20211117231341-9504 to remove it. will try anyways
	I1117 23:13:56.295274    9004 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:13:56.387027    9004 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:13:56.387027    9004 oci.go:83] error getting container status, will try to delete anyways: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:56.391232    9004 cli_runner.go:115] Run: docker exec --privileged -t newest-cni-20211117231341-9504 /bin/bash -c "sudo init 0"
	W1117 23:13:56.479021    9004 cli_runner.go:162] docker exec --privileged -t newest-cni-20211117231341-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:13:56.479415    9004 oci.go:658] error shutdown newest-cni-20211117231341-9504: docker exec --privileged -t newest-cni-20211117231341-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:57.484241    9004 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:13:57.589634    9004 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:57.589917    9004 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:57.589917    9004 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:13:57.589917    9004 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:58.056766    9004 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:13:58.163251    9004 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:58.163632    9004 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:58.163632    9004 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:13:58.163632    9004 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:59.059312    9004 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:13:59.157678    9004 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:59.157747    9004 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:59.157747    9004 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:13:59.157747    9004 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:59.798328    9004 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:13:59.893333    9004 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:59.893515    9004 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:13:59.893515    9004 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:13:59.893515    9004 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:01.010578    9004 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:01.107341    9004 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:14:01.107491    9004 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:01.107491    9004 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:14:01.107491    9004 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:02.622432    9004 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:02.710380    9004 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:14:02.710464    9004 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:02.710464    9004 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:14:02.710464    9004 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:05.757295    9004 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:05.854324    9004 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:14:05.854702    9004 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:05.854702    9004 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:14:05.854702    9004 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:11.641435    9004 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:11.733145    9004 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:14:11.733145    9004 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:11.733631    9004 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:14:11.733631    9004 oci.go:87] couldn't shut down newest-cni-20211117231341-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	 
	I1117 23:14:11.737878    9004 cli_runner.go:115] Run: docker rm -f -v newest-cni-20211117231341-9504
	W1117 23:14:11.828789    9004 cli_runner.go:162] docker rm -f -v newest-cni-20211117231341-9504 returned with exit code 1
	W1117 23:14:11.829560    9004 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:14:11.829560    9004 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:14:12.831534    9004 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:14:12.836488    9004 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:14:12.837223    9004 start.go:160] libmachine.API.Create for "newest-cni-20211117231341-9504" (driver="docker")
	I1117 23:14:12.837223    9004 client.go:168] LocalClient.Create starting
	I1117 23:14:12.837911    9004 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:14:12.837911    9004 main.go:130] libmachine: Decoding PEM data...
	I1117 23:14:12.837911    9004 main.go:130] libmachine: Parsing certificate...
	I1117 23:14:12.837911    9004 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:14:12.838562    9004 main.go:130] libmachine: Decoding PEM data...
	I1117 23:14:12.838657    9004 main.go:130] libmachine: Parsing certificate...
	I1117 23:14:12.842701    9004 cli_runner.go:115] Run: docker network inspect newest-cni-20211117231341-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:14:12.934752    9004 cli_runner.go:162] docker network inspect newest-cni-20211117231341-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:14:12.940077    9004 network_create.go:254] running [docker network inspect newest-cni-20211117231341-9504] to gather additional debugging logs...
	I1117 23:14:12.940225    9004 cli_runner.go:115] Run: docker network inspect newest-cni-20211117231341-9504
	W1117 23:14:13.031771    9004 cli_runner.go:162] docker network inspect newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:14:13.031892    9004 network_create.go:257] error running [docker network inspect newest-cni-20211117231341-9504]: docker network inspect newest-cni-20211117231341-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20211117231341-9504
	I1117 23:14:13.031892    9004 network_create.go:259] output of [docker network inspect newest-cni-20211117231341-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20211117231341-9504
	
	** /stderr **
	I1117 23:14:13.035464    9004 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:14:13.144427    9004 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006c6148] amended:false}} dirty:map[] misses:0}
	I1117 23:14:13.144427    9004 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:14:13.155427    9004 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006c6148] amended:true}} dirty:map[192.168.49.0:0xc0006c6148 192.168.58.0:0xc0006102b8] misses:0}
	I1117 23:14:13.156428    9004 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:14:13.156428    9004 network_create.go:106] attempt to create docker network newest-cni-20211117231341-9504 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 23:14:13.160419    9004 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117231341-9504
	I1117 23:14:13.358984    9004 network_create.go:90] docker network newest-cni-20211117231341-9504 192.168.58.0/24 created
	I1117 23:14:13.358984    9004 kic.go:106] calculated static IP "192.168.58.2" for the "newest-cni-20211117231341-9504" container
	I1117 23:14:13.366724    9004 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:14:13.475425    9004 cli_runner.go:115] Run: docker volume create newest-cni-20211117231341-9504 --label name.minikube.sigs.k8s.io=newest-cni-20211117231341-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:14:13.566946    9004 oci.go:102] Successfully created a docker volume newest-cni-20211117231341-9504
	I1117 23:14:13.571064    9004 cli_runner.go:115] Run: docker run --rm --name newest-cni-20211117231341-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20211117231341-9504 --entrypoint /usr/bin/test -v newest-cni-20211117231341-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:14:14.445423    9004 oci.go:106] Successfully prepared a docker volume newest-cni-20211117231341-9504
	I1117 23:14:14.445582    9004 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 23:14:14.445582    9004 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:14:14.449762    9004 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117231341-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 23:14:14.453386    9004 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W1117 23:14:14.581266    9004 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117231341-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:14:14.581464    9004 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117231341-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:14:14.815701    9004 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:14:14.533922447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:14:14.815701    9004 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:14:14.815701    9004 client.go:171] LocalClient.Create took 1.9784635s
	I1117 23:14:16.824833    9004 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:14:16.829099    9004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:14:16.925361    9004 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:14:16.925361    9004 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:17.111625    9004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:14:17.203045    9004 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:14:17.203345    9004 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:17.542144    9004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:14:17.632534    9004 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:14:17.632861    9004 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:18.101414    9004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:14:18.188593    9004 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	W1117 23:14:18.188593    9004 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	
	W1117 23:14:18.188593    9004 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:18.188593    9004 start.go:129] duration metric: createHost completed in 5.3568473s
	I1117 23:14:18.196232    9004 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:14:18.203125    9004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:14:18.291267    9004 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:14:18.291527    9004 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:18.492672    9004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:14:18.585799    9004 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:14:18.585799    9004 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:18.888250    9004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:14:19.007778    9004 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:14:19.007778    9004 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:19.675611    9004 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:14:19.764309    9004 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	W1117 23:14:19.764309    9004 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	
	W1117 23:14:19.764309    9004 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:19.764309    9004 fix.go:57] fixHost completed within 23.868414s
	I1117 23:14:19.764309    9004 start.go:80] releasing machines lock for "newest-cni-20211117231341-9504", held for 23.868414s
	W1117 23:14:19.764933    9004 out.go:241] * Failed to start docker container. Running "minikube delete -p newest-cni-20211117231341-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p newest-cni-20211117231341-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:14:19.768287    9004 out.go:176] 
	W1117 23:14:19.768971    9004 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:14:19.769009    9004 out.go:241] * 
	* 
	W1117 23:14:19.785063    9004 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:14:19.787505    9004 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p newest-cni-20211117231341-9504 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.4-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117231341-9504
helpers_test.go:235: (dbg) docker inspect newest-cni-20211117231341-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:13:45Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-20211117231341-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/newest-cni-20211117231341-9504/_data",
	        "Name": "newest-cni-20211117231341-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504: exit status 7 (1.8090133s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:14:21.808630    5704 status.go:247] status error: host: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20211117231341-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (39.90s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (1.98s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20211117231133-9504" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117231133-9504
helpers_test.go:235: (dbg) docker inspect no-preload-20211117231133-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:36Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-20211117231133-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/no-preload-20211117231133-9504/_data",
	        "Name": "no-preload-20211117231133-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504: exit status 7 (1.8572999s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:13:45.439460    3896 status.go:247] status error: host: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117231133-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (1.98s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (2.17s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20211117231133-9504" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20211117231133-9504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20211117231133-9504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (218.3176ms)

** stderr ** 
	error: context "no-preload-20211117231133-9504" does not exist

** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-20211117231133-9504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117231133-9504
helpers_test.go:235: (dbg) docker inspect no-preload-20211117231133-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:36Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-20211117231133-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/no-preload-20211117231133-9504/_data",
	        "Name": "no-preload-20211117231133-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504: exit status 7 (1.817467s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:13:47.605006    6536 status.go:247] status error: host: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117231133-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (2.17s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (3.64s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-20211117231133-9504 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p no-preload-20211117231133-9504 "sudo crictl images -o json": exit status 80 (1.7675197s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p no-preload-20211117231133-9504 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:289: failed to decode images json unexpected end of JSON input. output:

start_stop_delete_test.go:289: v1.22.4-rc.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-proxy:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-scheduler:v1.22.4-rc.0",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117231133-9504
helpers_test.go:235: (dbg) docker inspect no-preload-20211117231133-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:36Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-20211117231133-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/no-preload-20211117231133-9504/_data",
	        "Name": "no-preload-20211117231133-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504: exit status 7 (1.7608663s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:13:51.250491    6120 status.go:247] status error: host: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117231133-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (3.64s)

TestStartStop/group/no-preload/serial/Pause (5.56s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-20211117231133-9504 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p no-preload-20211117231133-9504 --alsologtostderr -v=1: exit status 80 (1.7585583s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 23:13:51.445889    5468 out.go:297] Setting OutFile to fd 1936 ...
	I1117 23:13:51.511574    5468 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:13:51.511574    5468 out.go:310] Setting ErrFile to fd 2024...
	I1117 23:13:51.511574    5468 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:13:51.523344    5468 out.go:304] Setting JSON to false
	I1117 23:13:51.523344    5468 mustload.go:65] Loading cluster: no-preload-20211117231133-9504
	I1117 23:13:51.524436    5468 config.go:176] Loaded profile config "no-preload-20211117231133-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 23:13:51.532141    5468 cli_runner.go:115] Run: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}
	W1117 23:13:52.986444    5468 cli_runner.go:162] docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:13:52.986444    5468 cli_runner.go:168] Completed: docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: (1.4542922s)
	I1117 23:13:52.995544    5468 out.go:176] 
	W1117 23:13:52.995544    5468 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504
	
	W1117 23:13:52.995761    5468 out.go:241] * 
	* 
	W1117 23:13:53.003379    5468 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:13:53.005232    5468 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:296: out/minikube-windows-amd64.exe pause -p no-preload-20211117231133-9504 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117231133-9504
helpers_test.go:235: (dbg) docker inspect no-preload-20211117231133-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:36Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-20211117231133-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/no-preload-20211117231133-9504/_data",
	        "Name": "no-preload-20211117231133-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504: exit status 7 (1.8146925s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:13:54.929381    4164 status.go:247] status error: host: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117231133-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117231133-9504
helpers_test.go:235: (dbg) docker inspect no-preload-20211117231133-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:36Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-20211117231133-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/no-preload-20211117231133-9504/_data",
	        "Name": "no-preload-20211117231133-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20211117231133-9504 -n no-preload-20211117231133-9504: exit status 7 (1.7651286s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:13:56.803718   11144 status.go:247] status error: host: state: unknown state "no-preload-20211117231133-9504": docker container inspect no-preload-20211117231133-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117231133-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117231133-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.56s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (1.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20211117231152-9504" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117231152-9504
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117231152-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:56Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-different-port-20211117231152-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/default-k8s-different-port-20211117231152-9504/_data",
	        "Name": "default-k8s-different-port-20211117231152-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504: exit status 7 (1.7985332s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:14:04.706828     300 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117231152-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (1.90s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (2.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20211117231152-9504" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20211117231152-9504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211117231152-9504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (205.6268ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-different-port-20211117231152-9504" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-different-port-20211117231152-9504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117231152-9504
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117231152-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:56Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-different-port-20211117231152-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/default-k8s-different-port-20211117231152-9504/_data",
	        "Name": "default-k8s-different-port-20211117231152-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504: exit status 7 (1.7366837s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:14:06.767898   11216 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117231152-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (2.06s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (3.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20211117231152-9504 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20211117231152-9504 "sudo crictl images -o json": exit status 80 (1.726859s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20211117231152-9504 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:289: failed to decode images json unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:289: v1.22.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.3",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.3",
- 	"k8s.gcr.io/kube-proxy:v1.22.3",
- 	"k8s.gcr.io/kube-scheduler:v1.22.3",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117231152-9504
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117231152-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:56Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-different-port-20211117231152-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/default-k8s-different-port-20211117231152-9504/_data",
	        "Name": "default-k8s-different-port-20211117231152-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504: exit status 7 (1.7397741s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:14:10.357462    6132 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117231152-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (3.59s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (5.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20211117231152-9504 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20211117231152-9504 --alsologtostderr -v=1: exit status 80 (1.7499449s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 23:14:10.557689    4504 out.go:297] Setting OutFile to fd 1908 ...
	I1117 23:14:10.618687    4504 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:14:10.618687    4504 out.go:310] Setting ErrFile to fd 1900...
	I1117 23:14:10.618687    4504 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:14:10.634239    4504 out.go:304] Setting JSON to false
	I1117 23:14:10.634239    4504 mustload.go:65] Loading cluster: default-k8s-different-port-20211117231152-9504
	I1117 23:14:10.635540    4504 config.go:176] Loaded profile config "default-k8s-different-port-20211117231152-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 23:14:10.643849    4504 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}
	W1117 23:14:12.101510    4504 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:14:12.101633    4504 cli_runner.go:168] Completed: docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: (1.4576496s)
	I1117 23:14:12.106248    4504 out.go:176] 
	W1117 23:14:12.106500    4504 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504
	
	W1117 23:14:12.106537    4504 out.go:241] * 
	* 
	W1117 23:14:12.113494    4504 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:14:12.116014    4504 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:296: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20211117231152-9504 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117231152-9504
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117231152-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:56Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-different-port-20211117231152-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/default-k8s-different-port-20211117231152-9504/_data",
	        "Name": "default-k8s-different-port-20211117231152-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504: exit status 7 (1.7957556s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:14:14.013610   10452 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117231152-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117231152-9504
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117231152-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:11:56Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-different-port-20211117231152-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/default-k8s-different-port-20211117231152-9504/_data",
	        "Name": "default-k8s-different-port-20211117231152-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20211117231152-9504 -n default-k8s-different-port-20211117231152-9504: exit status 7 (1.7926546s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:14:15.914135    8316 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117231152-9504": docker container inspect default-k8s-different-port-20211117231152-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117231152-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117231152-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (5.56s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (16.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-20211117231341-9504 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p newest-cni-20211117231341-9504 --alsologtostderr -v=3: exit status 82 (15.1003285s)

-- stdout --
	* Stopping node "newest-cni-20211117231341-9504"  ...
	* Stopping node "newest-cni-20211117231341-9504"  ...
	* Stopping node "newest-cni-20211117231341-9504"  ...
	* Stopping node "newest-cni-20211117231341-9504"  ...
	* Stopping node "newest-cni-20211117231341-9504"  ...
	* Stopping node "newest-cni-20211117231341-9504"  ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:14:23.821241    4964 out.go:297] Setting OutFile to fd 1912 ...
	I1117 23:14:23.893506    4964 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:14:23.893506    4964 out.go:310] Setting ErrFile to fd 1908...
	I1117 23:14:23.893506    4964 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:14:23.903589    4964 out.go:304] Setting JSON to false
	I1117 23:14:23.904338    4964 daemonize_windows.go:45] trying to kill existing schedule stop for profile newest-cni-20211117231341-9504...
	I1117 23:14:23.912875    4964 ssh_runner.go:152] Run: systemctl --version
	I1117 23:14:23.916784    4964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:14:25.387736    4964 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:14:25.387736    4964 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: (1.4709413s)
	I1117 23:14:25.387996    4964 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:25.669494    4964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:14:25.758652    4964 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:14:25.766511    4964 ssh_runner.go:152] Run: sudo service minikube-scheduled-stop stop
	I1117 23:14:25.770238    4964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:14:25.864456    4964 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:14:25.864777    4964 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:26.161794    4964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:14:26.255882    4964 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:14:26.256174    4964 retry.go:31] will retry after 351.64282ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:26.613681    4964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:14:26.714174    4964 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:14:26.714491    4964 retry.go:31] will retry after 520.108592ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:27.239597    4964 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:14:27.328797    4964 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:14:27.328966    4964 openrc.go:165] stop output: 
	E1117 23:14:27.328966    4964 daemonize_windows.go:39] error terminating scheduled stop for profile newest-cni-20211117231341-9504: stopping schedule-stop service for profile newest-cni-20211117231341-9504: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:27.329029    4964 mustload.go:65] Loading cluster: newest-cni-20211117231341-9504
	I1117 23:14:27.329968    4964 config.go:176] Loaded profile config "newest-cni-20211117231341-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 23:14:27.330232    4964 stop.go:39] StopHost: newest-cni-20211117231341-9504
	I1117 23:14:27.334641    4964 out.go:176] * Stopping node "newest-cni-20211117231341-9504"  ...
	I1117 23:14:27.342711    4964 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:27.430685    4964 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:14:27.430998    4964 stop.go:75] unable to get state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	W1117 23:14:27.430998    4964 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:27.430998    4964 retry.go:31] will retry after 565.637019ms: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:27.997368    4964 stop.go:39] StopHost: newest-cni-20211117231341-9504
	I1117 23:14:28.002066    4964 out.go:176] * Stopping node "newest-cni-20211117231341-9504"  ...
	I1117 23:14:28.008619    4964 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:28.096984    4964 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:14:28.096984    4964 stop.go:75] unable to get state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	W1117 23:14:28.097202    4964 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:28.097202    4964 retry.go:31] will retry after 984.778882ms: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:29.082836    4964 stop.go:39] StopHost: newest-cni-20211117231341-9504
	I1117 23:14:29.086641    4964 out.go:176] * Stopping node "newest-cni-20211117231341-9504"  ...
	I1117 23:14:29.093598    4964 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:29.187557    4964 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:14:29.187645    4964 stop.go:75] unable to get state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	W1117 23:14:29.187645    4964 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:29.187746    4964 retry.go:31] will retry after 1.343181417s: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:30.531192    4964 stop.go:39] StopHost: newest-cni-20211117231341-9504
	I1117 23:14:30.538500    4964 out.go:176] * Stopping node "newest-cni-20211117231341-9504"  ...
	I1117 23:14:30.546210    4964 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:30.634886    4964 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:14:30.635116    4964 stop.go:75] unable to get state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	W1117 23:14:30.635310    4964 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:30.635310    4964 retry.go:31] will retry after 2.703077529s: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:33.339331    4964 stop.go:39] StopHost: newest-cni-20211117231341-9504
	I1117 23:14:33.344924    4964 out.go:176] * Stopping node "newest-cni-20211117231341-9504"  ...
	I1117 23:14:33.351662    4964 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:33.452532    4964 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:14:33.452532    4964 stop.go:75] unable to get state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	W1117 23:14:33.452818    4964 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:33.452818    4964 retry.go:31] will retry after 5.139513932s: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:38.593388    4964 stop.go:39] StopHost: newest-cni-20211117231341-9504
	I1117 23:14:38.598191    4964 out.go:176] * Stopping node "newest-cni-20211117231341-9504"  ...
	I1117 23:14:38.605103    4964 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:38.704603    4964 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:14:38.704658    4964 stop.go:75] unable to get state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	W1117 23:14:38.704658    4964 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:38.707557    4964 out.go:176] 
	W1117 23:14:38.707557    4964 out.go:241] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-20211117231341-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-20211117231341-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	
	W1117 23:14:38.707557    4964 out.go:241] * 
	* 
	W1117 23:14:38.715437    4964 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_stop_eeafc14e343686f0df1f1d4295ac2d3042636ff8_2.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:14:38.718431    4964 out.go:176] 

** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-windows-amd64.exe stop -p newest-cni-20211117231341-9504 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117231341-9504
helpers_test.go:235: (dbg) docker inspect newest-cni-20211117231341-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:13:45Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-20211117231341-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/newest-cni-20211117231341-9504/_data",
	        "Name": "newest-cni-20211117231341-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504: exit status 7 (1.7248635s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:14:40.551764    5976 status.go:247] status error: host: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20211117231341-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (16.93s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (5.3s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504: exit status 7 (1.7179624s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:14:42.264063    4920 status.go:247] status error: host: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504

** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20211117231341-9504 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20211117231341-9504 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.719259s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117231341-9504
helpers_test.go:235: (dbg) docker inspect newest-cni-20211117231341-9504:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T23:13:45Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-20211117231341-9504"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/newest-cni-20211117231341-9504/_data",
	        "Name": "newest-cni-20211117231341-9504",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504: exit status 7 (1.7465398s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:14:45.844383    1880 status.go:247] status error: host: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20211117231341-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (5.30s)
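The `--format={{.Host}}` flag used throughout these checks is a Go text/template applied to minikube's status struct, which is why only the host state ("Nonexistent") appears on stdout. A sketch of that mechanism, assuming a simplified stand-in struct (field names beyond Host are not taken from minikube's source):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is a simplified stand-in for the struct minikube renders with
// --format; treat the field set as illustrative.
type Status struct {
	Host    string
	Kubelet string
}

// render applies a Go text/template to a Status value, the mechanism
// behind `minikube status --format={{.Host}}`.
func render(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := render("{{.Host}}", Status{Host: "Nonexistent", Kubelet: "Stopped"})
	fmt.Println(out) // → Nonexistent
}
```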

TestStartStop/group/newest-cni/serial/SecondStart (59.46s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20211117231341-9504 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.4-rc.0
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-20211117231341-9504 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.4-rc.0: exit status 80 (57.4755511s)

-- stdout --
	* [newest-cni-20211117231341-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	* Starting control plane node newest-cni-20211117231341-9504 in cluster newest-cni-20211117231341-9504
	* Pulling base image ...
	* docker "newest-cni-20211117231341-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-20211117231341-9504" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 23:14:46.041086    1752 out.go:297] Setting OutFile to fd 1936 ...
	I1117 23:14:46.107677    1752 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:14:46.107677    1752 out.go:310] Setting ErrFile to fd 1948...
	I1117 23:14:46.107677    1752 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:14:46.117390    1752 out.go:304] Setting JSON to false
	I1117 23:14:46.119228    1752 start.go:112] hostinfo: {"hostname":"minikube2","uptime":80201,"bootTime":1637110685,"procs":128,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 23:14:46.119228    1752 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 23:14:46.125469    1752 out.go:176] * [newest-cni-20211117231341-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 23:14:46.126507    1752 notify.go:174] Checking for updates...
	I1117 23:14:46.129274    1752 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 23:14:46.130701    1752 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 23:14:46.133673    1752 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:14:46.134852    1752 config.go:176] Loaded profile config "newest-cni-20211117231341-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 23:14:46.135070    1752 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:14:47.644614    1752 docker.go:132] docker version: linux-19.03.12
	I1117 23:14:47.648783    1752 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:14:47.988906    1752 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:14:47.728706598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:14:47.993104    1752 out.go:176] * Using the docker driver based on existing profile
	I1117 23:14:47.993104    1752 start.go:280] selected driver: docker
	I1117 23:14:47.993104    1752 start.go:775] validating driver "docker" against &{Name:newest-cni-20211117231341-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:newest-cni-20211117231341-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:14:47.993784    1752 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:14:48.106896    1752 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:14:48.439936    1752 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:14:48.183715558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 23:14:48.439936    1752 start_flags.go:777] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1117 23:14:48.439936    1752 cni.go:93] Creating CNI manager for ""
	I1117 23:14:48.439936    1752 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 23:14:48.439936    1752 start_flags.go:282] config:
	{Name:newest-cni-20211117231341-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:newest-cni-20211117231341-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 23:14:48.444552    1752 out.go:176] * Starting control plane node newest-cni-20211117231341-9504 in cluster newest-cni-20211117231341-9504
	I1117 23:14:48.444552    1752 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 23:14:48.447828    1752 out.go:176] * Pulling base image ...
	I1117 23:14:48.447828    1752 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 23:14:48.447828    1752 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 23:14:48.447828    1752 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4
	I1117 23:14:48.447828    1752 cache.go:57] Caching tarball of preloaded images
	I1117 23:14:48.448694    1752 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 23:14:48.448694    1752 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.4-rc.0 on docker
	I1117 23:14:48.449290    1752 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\newest-cni-20211117231341-9504\config.json ...
	I1117 23:14:48.548558    1752 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 23:14:48.548558    1752 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 23:14:48.548688    1752 cache.go:206] Successfully downloaded all kic artifacts
	I1117 23:14:48.548817    1752 start.go:313] acquiring machines lock for newest-cni-20211117231341-9504: {Name:mkb4f6b61af8e77a295e231eb5be8b44810e2cdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:14:48.548886    1752 start.go:317] acquired machines lock for "newest-cni-20211117231341-9504" in 52.4µs
	I1117 23:14:48.548886    1752 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:14:48.548886    1752 fix.go:55] fixHost starting: 
	I1117 23:14:48.556508    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:48.651448    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:14:48.651448    1752 fix.go:108] recreateIfNeeded on newest-cni-20211117231341-9504: state= err=unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:48.651448    1752 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:14:48.654901    1752 out.go:176] * docker "newest-cni-20211117231341-9504" container is missing, will recreate.
	I1117 23:14:48.655415    1752 delete.go:124] DEMOLISHING newest-cni-20211117231341-9504 ...
	I1117 23:14:48.662241    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:48.754646    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:14:48.754771    1752 stop.go:75] unable to get state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:48.754771    1752 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:48.762598    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:48.849471    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:14:48.849722    1752 delete.go:82] Unable to get host status for newest-cni-20211117231341-9504, assuming it has already been deleted: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:48.854022    1752 cli_runner.go:115] Run: docker container inspect -f {{.Id}} newest-cni-20211117231341-9504
	W1117 23:14:48.957918    1752 cli_runner.go:162] docker container inspect -f {{.Id}} newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:14:48.958138    1752 kic.go:360] could not find the container newest-cni-20211117231341-9504 to remove it. will try anyways
	I1117 23:14:48.961991    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:49.059888    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:14:49.059983    1752 oci.go:83] error getting container status, will try to delete anyways: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:49.063542    1752 cli_runner.go:115] Run: docker exec --privileged -t newest-cni-20211117231341-9504 /bin/bash -c "sudo init 0"
	W1117 23:14:49.160387    1752 cli_runner.go:162] docker exec --privileged -t newest-cni-20211117231341-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:14:49.160545    1752 oci.go:658] error shutdown newest-cni-20211117231341-9504: docker exec --privileged -t newest-cni-20211117231341-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:50.164904    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:50.259893    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:14:50.259982    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:50.259982    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:14:50.260138    1752 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:50.817610    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:50.918399    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:14:50.918708    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:50.918834    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:14:50.918958    1752 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:52.005177    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:52.094009    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:14:52.094226    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:52.094279    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:14:52.094279    1752 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:53.409291    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:53.501222    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:14:53.501222    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:53.501222    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:14:53.501222    1752 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:55.087784    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:55.178092    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:14:55.178197    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:55.178197    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:14:55.178356    1752 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:57.524689    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:14:57.622290    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:14:57.622378    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:14:57.622378    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:14:57.622378    1752 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:02.133483    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:02.223871    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:15:02.224060    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:02.224252    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:15:02.224375    1752 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:05.451083    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:05.562029    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:15:05.562029    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:05.562029    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:15:05.562329    1752 oci.go:87] couldn't shut down newest-cni-20211117231341-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	 
	I1117 23:15:05.566676    1752 cli_runner.go:115] Run: docker rm -f -v newest-cni-20211117231341-9504
	W1117 23:15:05.657701    1752 cli_runner.go:162] docker rm -f -v newest-cni-20211117231341-9504 returned with exit code 1
	W1117 23:15:05.659114    1752 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:15:05.659192    1752 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:15:06.659593    1752 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:15:06.663985    1752 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:15:06.664891    1752 start.go:160] libmachine.API.Create for "newest-cni-20211117231341-9504" (driver="docker")
	I1117 23:15:06.664891    1752 client.go:168] LocalClient.Create starting
	I1117 23:15:06.665454    1752 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:15:06.665726    1752 main.go:130] libmachine: Decoding PEM data...
	I1117 23:15:06.665726    1752 main.go:130] libmachine: Parsing certificate...
	I1117 23:15:06.665955    1752 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:15:06.666211    1752 main.go:130] libmachine: Decoding PEM data...
	I1117 23:15:06.666211    1752 main.go:130] libmachine: Parsing certificate...
	I1117 23:15:06.671627    1752 cli_runner.go:115] Run: docker network inspect newest-cni-20211117231341-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 23:15:06.765253    1752 cli_runner.go:162] docker network inspect newest-cni-20211117231341-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 23:15:06.769271    1752 network_create.go:254] running [docker network inspect newest-cni-20211117231341-9504] to gather additional debugging logs...
	I1117 23:15:06.769271    1752 cli_runner.go:115] Run: docker network inspect newest-cni-20211117231341-9504
	W1117 23:15:06.866661    1752 cli_runner.go:162] docker network inspect newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:15:06.866741    1752 network_create.go:257] error running [docker network inspect newest-cni-20211117231341-9504]: docker network inspect newest-cni-20211117231341-9504: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20211117231341-9504
	I1117 23:15:06.866741    1752 network_create.go:259] output of [docker network inspect newest-cni-20211117231341-9504]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20211117231341-9504
	
	** /stderr **
	I1117 23:15:06.871168    1752 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:15:06.980386    1752 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006c50] misses:0}
	I1117 23:15:06.980959    1752 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 23:15:06.980959    1752 network_create.go:106] attempt to create docker network newest-cni-20211117231341-9504 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 23:15:06.984051    1752 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117231341-9504
	I1117 23:15:07.195259    1752 network_create.go:90] docker network newest-cni-20211117231341-9504 192.168.49.0/24 created
	I1117 23:15:07.195259    1752 kic.go:106] calculated static IP "192.168.49.2" for the "newest-cni-20211117231341-9504" container
	I1117 23:15:07.203334    1752 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:15:07.296713    1752 cli_runner.go:115] Run: docker volume create newest-cni-20211117231341-9504 --label name.minikube.sigs.k8s.io=newest-cni-20211117231341-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:15:07.388420    1752 oci.go:102] Successfully created a docker volume newest-cni-20211117231341-9504
	I1117 23:15:07.392005    1752 cli_runner.go:115] Run: docker run --rm --name newest-cni-20211117231341-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20211117231341-9504 --entrypoint /usr/bin/test -v newest-cni-20211117231341-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:15:08.240977    1752 oci.go:106] Successfully prepared a docker volume newest-cni-20211117231341-9504
	I1117 23:15:08.241319    1752 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 23:15:08.241354    1752 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:15:08.245690    1752 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:15:08.246616    1752 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117231341-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:15:08.354821    1752 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117231341-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:15:08.354821    1752 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117231341-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:15:08.596964    1752 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 23:15:08.336520669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:15:08.597256    1752 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:15:08.597365    1752 client.go:171] LocalClient.Create took 1.9324595s
	I1117 23:15:10.605254    1752 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:15:10.608960    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:10.698729    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:15:10.699312    1752 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:10.854492    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:10.947895    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:15:10.948158    1752 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:11.255457    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:11.346112    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:15:11.346112    1752 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:11.923108    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:12.018245    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	W1117 23:15:12.018416    1752 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	
	W1117 23:15:12.018416    1752 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:12.018416    1752 start.go:129] duration metric: createHost completed in 5.3587834s
	I1117 23:15:12.026209    1752 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:15:12.029471    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:12.115887    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:15:12.116128    1752 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:12.299344    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:12.386843    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:15:12.386843    1752 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:12.722068    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:12.817579    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:15:12.817981    1752 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:13.282660    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:13.378303    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	W1117 23:15:13.378597    1752 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	
	W1117 23:15:13.378597    1752 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:13.378597    1752 fix.go:57] fixHost completed within 24.829524s
	I1117 23:15:13.378597    1752 start.go:80] releasing machines lock for "newest-cni-20211117231341-9504", held for 24.829524s
	W1117 23:15:13.378597    1752 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:15:13.379164    1752 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:15:13.379164    1752 start.go:547] Will try again in 5 seconds ...
	I1117 23:15:18.379662    1752 start.go:313] acquiring machines lock for newest-cni-20211117231341-9504: {Name:mkb4f6b61af8e77a295e231eb5be8b44810e2cdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:15:18.379662    1752 start.go:317] acquired machines lock for "newest-cni-20211117231341-9504" in 0s
	I1117 23:15:18.380363    1752 start.go:93] Skipping create...Using existing machine configuration
	I1117 23:15:18.380392    1752 fix.go:55] fixHost starting: 
	I1117 23:15:18.388780    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:18.475560    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:15:18.475712    1752 fix.go:108] recreateIfNeeded on newest-cni-20211117231341-9504: state= err=unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:18.475943    1752 fix.go:113] machineExists: false. err=machine does not exist
	I1117 23:15:18.480673    1752 out.go:176] * docker "newest-cni-20211117231341-9504" container is missing, will recreate.
	I1117 23:15:18.480740    1752 delete.go:124] DEMOLISHING newest-cni-20211117231341-9504 ...
	I1117 23:15:18.486397    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:18.570653    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:15:18.571084    1752 stop.go:75] unable to get state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:18.571180    1752 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:18.579744    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:18.679736    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:15:18.679981    1752 delete.go:82] Unable to get host status for newest-cni-20211117231341-9504, assuming it has already been deleted: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:18.684047    1752 cli_runner.go:115] Run: docker container inspect -f {{.Id}} newest-cni-20211117231341-9504
	W1117 23:15:18.767647    1752 cli_runner.go:162] docker container inspect -f {{.Id}} newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:15:18.767647    1752 kic.go:360] could not find the container newest-cni-20211117231341-9504 to remove it. will try anyways
	I1117 23:15:18.772173    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:18.856730    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	W1117 23:15:18.857085    1752 oci.go:83] error getting container status, will try to delete anyways: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:18.861052    1752 cli_runner.go:115] Run: docker exec --privileged -t newest-cni-20211117231341-9504 /bin/bash -c "sudo init 0"
	W1117 23:15:18.948399    1752 cli_runner.go:162] docker exec --privileged -t newest-cni-20211117231341-9504 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 23:15:18.948658    1752 oci.go:658] error shutdown newest-cni-20211117231341-9504: docker exec --privileged -t newest-cni-20211117231341-9504 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:19.954330    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:20.042436    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:15:20.042637    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:20.042637    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:15:20.042717    1752 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:20.438967    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:20.527118    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:15:20.527198    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:20.527198    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:15:20.527313    1752 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:21.127517    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:21.211310    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:15:21.211310    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:21.211571    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:15:21.211571    1752 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:22.542855    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:22.631667    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:15:22.631941    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:22.632003    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:15:22.632003    1752 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:23.850354    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:23.938262    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:15:23.938262    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:23.938616    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:15:23.938616    1752 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:25.723812    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:25.821449    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:15:25.821882    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:25.821932    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:15:25.822004    1752 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:29.095455    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:29.190242    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:15:29.190376    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:29.190376    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:15:29.190376    1752 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:35.293490    1752 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:35.382051    1752 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:15:35.382051    1752 oci.go:670] temporary error verifying shutdown: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:35.382051    1752 oci.go:672] temporary error: container newest-cni-20211117231341-9504 status is  but expect it to be exited
	I1117 23:15:35.382051    1752 oci.go:87] couldn't shut down newest-cni-20211117231341-9504 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	 
	I1117 23:15:35.386701    1752 cli_runner.go:115] Run: docker rm -f -v newest-cni-20211117231341-9504
	W1117 23:15:35.475674    1752 cli_runner.go:162] docker rm -f -v newest-cni-20211117231341-9504 returned with exit code 1
	W1117 23:15:35.476659    1752 delete.go:139] delete failed (probably ok) <nil>
	I1117 23:15:35.476659    1752 fix.go:120] Sleeping 1 second for extra luck!
	I1117 23:15:36.476822    1752 start.go:126] createHost starting for "" (driver="docker")
	I1117 23:15:36.481166    1752 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 23:15:36.481468    1752 start.go:160] libmachine.API.Create for "newest-cni-20211117231341-9504" (driver="docker")
	I1117 23:15:36.481554    1752 client.go:168] LocalClient.Create starting
	I1117 23:15:36.482045    1752 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1117 23:15:36.482338    1752 main.go:130] libmachine: Decoding PEM data...
	I1117 23:15:36.482449    1752 main.go:130] libmachine: Parsing certificate...
	I1117 23:15:36.482617    1752 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1117 23:15:36.482617    1752 main.go:130] libmachine: Decoding PEM data...
	I1117 23:15:36.482617    1752 main.go:130] libmachine: Parsing certificate...
	I1117 23:15:36.487447    1752 cli_runner.go:115] Run: docker network inspect newest-cni-20211117231341-9504 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 23:15:36.583948    1752 network_create.go:67] Found existing network {name:newest-cni-20211117231341-9504 subnet:0xc000eee660 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1117 23:15:36.583948    1752 kic.go:106] calculated static IP "192.168.49.2" for the "newest-cni-20211117231341-9504" container
	I1117 23:15:36.591115    1752 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 23:15:36.687608    1752 cli_runner.go:115] Run: docker volume create newest-cni-20211117231341-9504 --label name.minikube.sigs.k8s.io=newest-cni-20211117231341-9504 --label created_by.minikube.sigs.k8s.io=true
	I1117 23:15:36.777266    1752 oci.go:102] Successfully created a docker volume newest-cni-20211117231341-9504
	I1117 23:15:36.780789    1752 cli_runner.go:115] Run: docker run --rm --name newest-cni-20211117231341-9504-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20211117231341-9504 --entrypoint /usr/bin/test -v newest-cni-20211117231341-9504:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 23:15:37.620361    1752 oci.go:106] Successfully prepared a docker volume newest-cni-20211117231341-9504
	I1117 23:15:37.620551    1752 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 23:15:37.620642    1752 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 23:15:37.626593    1752 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 23:15:37.627911    1752 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117231341-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	W1117 23:15:37.742259    1752 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117231341-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I1117 23:15:37.742343    1752 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117231341-9504:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: The notification platform is unavailable.\r\n\r\nThe notification platform is unavailable.\r\n","StackTrace":"   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)\r\n   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__0.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.WPF\\PromptShareDirectory.cs:line 25\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 80\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
	See 'docker run --help'.
	I1117 23:15:37.958509    1752 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 23:15:37.704379757 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	E1117 23:15:37.958975    1752 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 23:15:37.959041    1752 client.go:171] LocalClient.Create took 1.4774756s
	I1117 23:15:39.968623    1752 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:15:39.972324    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:40.058659    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:15:40.058903    1752 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:40.265933    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:40.363777    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:15:40.363941    1752 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:40.667994    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:40.756624    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:15:40.756880    1752 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:41.467759    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:41.556764    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	W1117 23:15:41.556764    1752 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	
	W1117 23:15:41.556764    1752 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:41.556764    1752 start.go:129] duration metric: createHost completed in 5.0796957s
	I1117 23:15:41.564675    1752 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:15:41.568628    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:41.656715    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:15:41.657021    1752 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:42.003602    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:42.094461    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:15:42.094680    1752 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:42.549225    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:42.639509    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	I1117 23:15:42.639798    1752 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:43.220374    1752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504
	W1117 23:15:43.311035    1752 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504 returned with exit code 1
	W1117 23:15:43.311035    1752 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	
	W1117 23:15:43.311035    1752 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117231341-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117231341-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	I1117 23:15:43.311035    1752 fix.go:57] fixHost completed within 24.930456s
	I1117 23:15:43.311035    1752 start.go:80] releasing machines lock for "newest-cni-20211117231341-9504", held for 24.9311864s
	W1117 23:15:43.311679    1752 out.go:241] * Failed to start docker container. Running "minikube delete -p newest-cni-20211117231341-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p newest-cni-20211117231341-9504" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 23:15:43.317104    1752 out.go:176] 
	W1117 23:15:43.317344    1752 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 23:15:43.317449    1752 out.go:241] * 
	* 
	W1117 23:15:43.318585    1752 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:15:43.320676    1752 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p newest-cni-20211117231341-9504 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.4-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117231341-9504
helpers_test.go:235: (dbg) docker inspect newest-cni-20211117231341-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-20211117231341-9504",
	        "Id": "121678c45985b86b48c3533cec7d490b96bd47af74ec40c6bf0ae6c1fe306895",
	        "Created": "2021-11-17T23:15:07.073141686Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504: exit status 7 (1.7555653s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:15:45.300577   11540 status.go:247] status error: host: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20211117231341-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (59.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (3.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-20211117231341-9504 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p newest-cni-20211117231341-9504 "sudo crictl images -o json": exit status 80 (1.7428578s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_2ebd0b017f5d88727e5083393ee181280e239d1d_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-windows-amd64.exe ssh -p newest-cni-20211117231341-9504 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:289: failed to decode images json: unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:289: v1.22.4-rc.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-proxy:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-scheduler:v1.22.4-rc.0",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117231341-9504
helpers_test.go:235: (dbg) docker inspect newest-cni-20211117231341-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-20211117231341-9504",
	        "Id": "121678c45985b86b48c3533cec7d490b96bd47af74ec40c6bf0ae6c1fe306895",
	        "Created": "2021-11-17T23:15:07.073141686Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504: exit status 7 (1.747738s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:15:48.897363   12016 status.go:247] status error: host: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20211117231341-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (3.59s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (5.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-20211117231341-9504 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p newest-cni-20211117231341-9504 --alsologtostderr -v=1: exit status 80 (1.7527431s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 23:15:49.108474   11060 out.go:297] Setting OutFile to fd 1900 ...
	I1117 23:15:49.178762   11060 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:15:49.178762   11060 out.go:310] Setting ErrFile to fd 1740...
	I1117 23:15:49.178762   11060 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:15:49.189707   11060 out.go:304] Setting JSON to false
	I1117 23:15:49.190019   11060 mustload.go:65] Loading cluster: newest-cni-20211117231341-9504
	I1117 23:15:49.190323   11060 config.go:176] Loaded profile config "newest-cni-20211117231341-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 23:15:49.198869   11060 cli_runner.go:115] Run: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}
	W1117 23:15:50.631757   11060 cli_runner.go:162] docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}} returned with exit code 1
	I1117 23:15:50.631962   11060 cli_runner.go:168] Completed: docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: (1.4328769s)
	I1117 23:15:50.638351   11060 out.go:176] 
	W1117 23:15:50.638981   11060 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504
	
	W1117 23:15:50.638981   11060 out.go:241] * 
	* 
	W1117 23:15:50.648308   11060 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_pause_8893f1c977cc86351b34571029ffce3d31854fd6_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 23:15:50.650539   11060 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:296: out/minikube-windows-amd64.exe pause -p newest-cni-20211117231341-9504 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117231341-9504
helpers_test.go:235: (dbg) docker inspect newest-cni-20211117231341-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-20211117231341-9504",
	        "Id": "121678c45985b86b48c3533cec7d490b96bd47af74ec40c6bf0ae6c1fe306895",
	        "Created": "2021-11-17T23:15:07.073141686Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504: exit status 7 (1.7759558s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 23:15:52.534814   10592 status.go:247] status error: host: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20211117231341-9504" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117231341-9504
helpers_test.go:235: (dbg) docker inspect newest-cni-20211117231341-9504:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-20211117231341-9504",
	        "Id": "121678c45985b86b48c3533cec7d490b96bd47af74ec40c6bf0ae6c1fe306895",
	        "Created": "2021-11-17T23:15:07.073141686Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20211117231341-9504 -n newest-cni-20211117231341-9504: exit status 7 (1.7505169s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 23:15:54.386594   11880 status.go:247] status error: host: state: unknown state "newest-cni-20211117231341-9504": docker container inspect newest-cni-20211117231341-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117231341-9504

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20211117231341-9504" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.49s)
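The post-mortem above runs `minikube status` and logs "status error: exit status 7 (may be ok)" when the host is in state "Nonexistent". A minimal sketch of that exit-code handling, assuming only what this run shows (0 = host running, 7 = host in a non-running state such as "Nonexistent"); the function name is hypothetical:

```shell
#!/bin/sh
# Sketch only: mirrors how the harness interprets `minikube status`
# exit codes in this log. 0 = host Running; 7 = host not running
# ("Nonexistent"/"Stopped"), which the harness treats as "may be ok".
classify_status() {
  case "$1" in
    0) echo "running" ;;
    7) echo "not-running (may be ok)" ;;
    *) echo "error" ;;
  esac
}

classify_status 7
```

With exit code 7 the harness skips log retrieval rather than failing outright, which is why the failure above surfaces as a Pause failure rather than a status failure.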

Test pass (60/234)

Order passed test Duration
3 TestDownloadOnly/v1.14.0/json-events 13.28
4 TestDownloadOnly/v1.14.0/preload-exists 0.14
7 TestDownloadOnly/v1.14.0/kubectl 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.72
10 TestDownloadOnly/v1.22.3/json-events 13.59
11 TestDownloadOnly/v1.22.3/preload-exists 0
14 TestDownloadOnly/v1.22.3/kubectl 0
15 TestDownloadOnly/v1.22.3/LogsDuration 0.35
17 TestDownloadOnly/v1.22.4-rc.0/json-events 12.52
18 TestDownloadOnly/v1.22.4-rc.0/preload-exists 0.12
21 TestDownloadOnly/v1.22.4-rc.0/kubectl 0
22 TestDownloadOnly/v1.22.4-rc.0/LogsDuration 0.51
23 TestDownloadOnly/DeleteAll 3.11
24 TestDownloadOnly/DeleteAlwaysSucceeds 2.52
25 TestDownloadOnlyKic 23.31
39 TestErrorSpam/start 7.91
40 TestErrorSpam/status 5.36
41 TestErrorSpam/pause 5.41
42 TestErrorSpam/unpause 5.35
43 TestErrorSpam/stop 45.38
46 TestFunctional/serial/CopySyncFile 0.03
55 TestFunctional/serial/CacheCmd/cache/add_local 3.32
68 TestFunctional/parallel/ConfigCmd 1.87
70 TestFunctional/parallel/DryRun 5.05
71 TestFunctional/parallel/InternationalLanguage 2.36
76 TestFunctional/parallel/AddonsCmd 2.22
91 TestFunctional/parallel/Version/short 0.37
97 TestFunctional/parallel/ProfileCmd/profile_not_create 3.82
98 TestFunctional/parallel/ProfileCmd/profile_list 2.29
99 TestFunctional/parallel/ProfileCmd/profile_json_output 2.22
101 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
108 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
111 TestFunctional/parallel/ImageCommands/Setup 1.9
114 TestFunctional/parallel/ImageCommands/ImageRemove 3.51
116 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.55
117 TestFunctional/delete_addon-resizer_images 0.28
118 TestFunctional/delete_my-image_image 0.09
119 TestFunctional/delete_minikube_cached_images 0.09
125 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 1.8
138 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
144 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
152 TestErrorJSONOutput 2.44
155 TestKicCustomNetwork/use_default_bridge_network 215.65
156 TestKicExistingNetwork 217.61
157 TestMainNoArgs 0.29
164 TestMountStart/serial/DeleteFirst 2.99
193 TestRunningBinaryUpgrade 253.16
210 TestNoKubernetes/serial/VerifyK8sNotRunning 1.9
211 TestNoKubernetes/serial/ProfileList 4.36
214 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 3.01
215 TestStoppedBinaryUpgrade/Setup 0.59
216 TestStoppedBinaryUpgrade/Upgrade 168.32
228 TestStoppedBinaryUpgrade/MinikubeLogs 4.47
240 TestPause/serial/DeletePaused 3.48
289 TestStartStop/group/newest-cni/serial/DeployApp 0
290 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.81
294 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
295 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.14.0/json-events (13.28s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20211117222633-9504 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20211117222633-9504 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker: (13.2782025s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (13.28s)

TestDownloadOnly/v1.14.0/preload-exists (0.14s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.14s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
--- PASS: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.14.0/LogsDuration (0.72s)

=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20211117222633-9504
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20211117222633-9504: exit status 85 (715.3013ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 22:26:35
	Running on machine: minikube2
	Binary: Built with gc go1.17.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 22:26:35.252267   11168 out.go:297] Setting OutFile to fd 612 ...
	I1117 22:26:35.322047   11168 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:26:35.322047   11168 out.go:310] Setting ErrFile to fd 608...
	I1117 22:26:35.322047   11168 out.go:344] TERM=,COLORTERM=, which probably does not support color
	W1117 22:26:35.331068   11168 root.go:293] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1117 22:26:35.334843   11168 out.go:304] Setting JSON to true
	I1117 22:26:35.337303   11168 start.go:112] hostinfo: {"hostname":"minikube2","uptime":77311,"bootTime":1637110684,"procs":127,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 22:26:35.337303   11168 start.go:120] gopshost.Virtualization returned error: not implemented yet
	W1117 22:26:35.495746   11168 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1117 22:26:35.495932   11168 notify.go:174] Checking for updates...
	I1117 22:26:35.501360   11168 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 22:26:37.039732   11168 docker.go:132] docker version: linux-19.03.12
	I1117 22:26:37.044526   11168 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:26:37.428058   11168 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:39 OomKillDisable:true NGoroutines:50 SystemTime:2021-11-17 22:26:37.126704747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:26:37.516957   11168 start.go:280] selected driver: docker
	I1117 22:26:37.516957   11168 start.go:775] validating driver "docker" against <nil>
	I1117 22:26:37.532257   11168 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:26:37.877253   11168 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:39 OomKillDisable:true NGoroutines:50 SystemTime:2021-11-17 22:26:37.612489825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:26:37.877253   11168 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 22:26:37.928879   11168 start_flags.go:349] Using suggested 5902MB memory alloc based on sys=65534MB, container=5950MB
	I1117 22:26:37.929432   11168 start_flags.go:740] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 22:26:37.929432   11168 cni.go:93] Creating CNI manager for ""
	I1117 22:26:37.929565   11168 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 22:26:37.929565   11168 start_flags.go:282] config:
	{Name:download-only-20211117222633-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:5902 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20211117222633-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:26:37.933864   11168 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 22:26:37.936464   11168 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 22:26:37.936588   11168 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 22:26:37.984309   11168 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4
	I1117 22:26:37.984409   11168 cache.go:57] Caching tarball of preloaded images
	I1117 22:26:37.984528   11168 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 22:26:37.987835   11168 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 22:26:38.038738   11168 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
	I1117 22:26:38.038930   11168 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase_v0.0.28@sha256_4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar
	I1117 22:26:38.039106   11168 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase_v0.0.28@sha256_4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar
	I1117 22:26:38.039216   11168 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory
	I1117 22:26:38.039987   11168 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
	I1117 22:26:38.054682   11168 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4?checksum=md5:ec855295d74f2fe00733f44cbe6bc00d -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4
	I1117 22:26:43.734021   11168 preload.go:248] saving checksum for preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 22:26:43.734326   11168 preload.go:255] verifying checksum of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 22:26:44.884280   11168 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c as a tarball
	I1117 22:26:45.094465   11168 cache.go:60] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I1117 22:26:45.095344   11168 profile.go:147] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-20211117222633-9504\config.json ...
	I1117 22:26:45.095344   11168 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-20211117222633-9504\config.json: {Name:mked268ce510f8f15a024af2691dd4417220e4b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 22:26:45.096554   11168 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 22:26:45.097172   11168 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\windows\v1.14.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117222633-9504"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.72s)
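The test above passes even though `minikube logs` exits with status 85, because the download-only profile never started a control plane ("The control plane node \"\" does not exist."). A minimal sketch of how a wrapper could tolerate that, assuming only the exit-code meaning observed in this run; the function name is hypothetical:

```shell
#!/bin/sh
# Sketch only: treats `minikube logs` exit status 85 (control plane
# node does not exist, seen here on a download-only profile) as
# acceptable, matching what aaa_download_only_test.go expects.
logs_ok() {
  rc="$1"   # exit code from `minikube logs -p <profile>`
  if [ "$rc" -eq 0 ] || [ "$rc" -eq 85 ]; then
    echo "acceptable"
  else
    echo "unexpected"
  fi
}

logs_ok 85
```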

TestDownloadOnly/v1.22.3/json-events (13.59s)

=== RUN   TestDownloadOnly/v1.22.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20211117222633-9504 --force --alsologtostderr --kubernetes-version=v1.22.3 --container-runtime=docker --driver=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20211117222633-9504 --force --alsologtostderr --kubernetes-version=v1.22.3 --container-runtime=docker --driver=docker: (13.5923636s)
--- PASS: TestDownloadOnly/v1.22.3/json-events (13.59s)
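The preload tarball URLs in these download-only runs carry a `?checksum=md5:<hex>` query parameter, which minikube uses to verify the download. A minimal sketch of such a check, assuming `md5sum` is available; the file and digest below are stand-ins for illustration, not the real preload tarball:

```shell
#!/bin/sh
# Sketch only: verifies a downloaded file against the md5 hex digest
# carried in a "checksum=md5:<hex>" query parameter, as on the
# preloaded-images tarball URLs above.
verify_md5() {
  # $1 = downloaded file, $2 = expected md5 hex digest
  actual=$(md5sum "$1" | awk '{print $1}')
  if [ "$actual" = "$2" ]; then echo ok; else echo mismatch; fi
}

tmp=$(mktemp)
printf 'hello' > "$tmp"
# md5("hello") = 5d41402abc4b2a76b9719d911017c592
verify_md5 "$tmp" 5d41402abc4b2a76b9719d911017c592
rm -f "$tmp"
```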

TestDownloadOnly/v1.22.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.3/preload-exists
--- PASS: TestDownloadOnly/v1.22.3/preload-exists (0.00s)

TestDownloadOnly/v1.22.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.3/kubectl
--- PASS: TestDownloadOnly/v1.22.3/kubectl (0.00s)

TestDownloadOnly/v1.22.3/LogsDuration (0.35s)

=== RUN   TestDownloadOnly/v1.22.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20211117222633-9504
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20211117222633-9504: exit status 85 (349.2973ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 22:26:48
	Running on machine: minikube2
	Binary: Built with gc go1.17.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 22:26:48.051192    9452 out.go:297] Setting OutFile to fd 664 ...
	I1117 22:26:48.117299    9452 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:26:48.117299    9452 out.go:310] Setting ErrFile to fd 656...
	I1117 22:26:48.117299    9452 out.go:344] TERM=,COLORTERM=, which probably does not support color
	W1117 22:26:48.128022    9452 root.go:293] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I1117 22:26:48.128556    9452 out.go:304] Setting JSON to true
	I1117 22:26:48.130347    9452 start.go:112] hostinfo: {"hostname":"minikube2","uptime":77323,"bootTime":1637110685,"procs":127,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 22:26:48.131267    9452 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 22:26:48.135401    9452 notify.go:174] Checking for updates...
	I1117 22:26:48.140569    9452 config.go:176] Loaded profile config "download-only-20211117222633-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	W1117 22:26:48.141072    9452 start.go:683] api.Load failed for download-only-20211117222633-9504: filestore "download-only-20211117222633-9504": Docker machine "download-only-20211117222633-9504" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 22:26:48.141072    9452 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 22:26:48.141453    9452 start.go:683] api.Load failed for download-only-20211117222633-9504: filestore "download-only-20211117222633-9504": Docker machine "download-only-20211117222633-9504" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 22:26:49.641403    9452 docker.go:132] docker version: linux-19.03.12
	I1117 22:26:49.646028    9452 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:26:50.085072    9452 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:39 OomKillDisable:true NGoroutines:50 SystemTime:2021-11-17 22:26:49.733661993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:26:50.406796    9452 start.go:280] selected driver: docker
	I1117 22:26:50.407026    9452 start.go:775] validating driver "docker" against &{Name:download-only-20211117222633-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:5902 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20211117222633-9504 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:26:50.421501    9452 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:26:50.751447    9452 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:39 OomKillDisable:true NGoroutines:50 SystemTime:2021-11-17 22:26:50.499426195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:26:50.811196    9452 cni.go:93] Creating CNI manager for ""
	I1117 22:26:50.811267    9452 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 22:26:50.811267    9452 start_flags.go:282] config:
	{Name:download-only-20211117222633-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:5902 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:download-only-20211117222633-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:26:50.814889    9452 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 22:26:50.817710    9452 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:26:50.817710    9452 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 22:26:50.871138    9452 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 22:26:50.871138    9452 cache.go:57] Caching tarball of preloaded images
	I1117 22:26:50.872392    9452 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 22:26:50.874826    9452 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 ...
	I1117 22:26:50.908097    9452 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
	I1117 22:26:50.908097    9452 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase_v0.0.28@sha256_4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar
	I1117 22:26:50.908097    9452 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase_v0.0.28@sha256_4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar
	I1117 22:26:50.908097    9452 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory
	I1117 22:26:50.908097    9452 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory, skipping pull
	I1117 22:26:50.908097    9452 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in cache, skipping pull
	I1117 22:26:50.908097    9452 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c as a tarball
	I1117 22:26:50.938760    9452 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4?checksum=md5:b55c92a19bc9eceed8b554be67ddf54e -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 22:26:56.939792    9452 preload.go:248] saving checksum for preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 ...
	I1117 22:26:56.940721    9452 preload.go:255] verifying checksumm of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117222633-9504"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.3/LogsDuration (0.35s)

TestDownloadOnly/v1.22.4-rc.0/json-events (12.52s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20211117222633-9504 --force --alsologtostderr --kubernetes-version=v1.22.4-rc.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20211117222633-9504 --force --alsologtostderr --kubernetes-version=v1.22.4-rc.0 --container-runtime=docker --driver=docker: (12.5190773s)
--- PASS: TestDownloadOnly/v1.22.4-rc.0/json-events (12.52s)

TestDownloadOnly/v1.22.4-rc.0/preload-exists (0.12s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.4-rc.0/preload-exists (0.12s)

TestDownloadOnly/v1.22.4-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.22.4-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.22.4-rc.0/LogsDuration (0.51s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20211117222633-9504
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20211117222633-9504: exit status 85 (512.2211ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 22:27:02
	Running on machine: minikube2
	Binary: Built with gc go1.17.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 22:27:01.978908    3304 out.go:297] Setting OutFile to fd 692 ...
	I1117 22:27:02.048036    3304 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:27:02.048036    3304 out.go:310] Setting ErrFile to fd 708...
	I1117 22:27:02.048036    3304 out.go:344] TERM=,COLORTERM=, which probably does not support color
	W1117 22:27:02.058101    3304 root.go:293] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I1117 22:27:02.059039    3304 out.go:304] Setting JSON to true
	I1117 22:27:02.061135    3304 start.go:112] hostinfo: {"hostname":"minikube2","uptime":77337,"bootTime":1637110685,"procs":127,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 22:27:02.061135    3304 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 22:27:02.290889    3304 notify.go:174] Checking for updates...
	I1117 22:27:02.383509    3304 config.go:176] Loaded profile config "download-only-20211117222633-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	W1117 22:27:02.384580    3304 start.go:683] api.Load failed for download-only-20211117222633-9504: filestore "download-only-20211117222633-9504": Docker machine "download-only-20211117222633-9504" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 22:27:02.384747    3304 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 22:27:02.384747    3304 start.go:683] api.Load failed for download-only-20211117222633-9504: filestore "download-only-20211117222633-9504": Docker machine "download-only-20211117222633-9504" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 22:27:03.892279    3304 docker.go:132] docker version: linux-19.03.12
	I1117 22:27:03.895657    3304 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:27:04.220068    3304 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:39 OomKillDisable:true NGoroutines:50 SystemTime:2021-11-17 22:27:03.975130316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:27:04.237224    3304 start.go:280] selected driver: docker
	I1117 22:27:04.237224    3304 start.go:775] validating driver "docker" against &{Name:download-only-20211117222633-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:5902 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:download-only-20211117222633-9504 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:27:04.251100    3304 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:27:04.600630    3304 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:39 OomKillDisable:true NGoroutines:50 SystemTime:2021-11-17 22:27:04.332196189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:27:04.654045    3304 cni.go:93] Creating CNI manager for ""
	I1117 22:27:04.654127    3304 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 22:27:04.654127    3304 start_flags.go:282] config:
	{Name:download-only-20211117222633-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:5902 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:download-only-20211117222633-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:27:04.812319    3304 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 22:27:04.914972    3304 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 22:27:04.914972    3304 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 22:27:04.955688    3304 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4
	I1117 22:27:04.955688    3304 cache.go:57] Caching tarball of preloaded images
	I1117 22:27:04.955688    3304 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 22:27:04.959129    3304 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 22:27:05.011712    3304 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
	I1117 22:27:05.011712    3304 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase_v0.0.28@sha256_4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar
	I1117 22:27:05.011974    3304 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\kicbase_v0.0.28@sha256_4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c.tar
	I1117 22:27:05.011974    3304 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory
	I1117 22:27:05.011974    3304 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory, skipping pull
	I1117 22:27:05.011974    3304 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in cache, skipping pull
	I1117 22:27:05.012417    3304 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c as a tarball
	I1117 22:27:05.017013    3304 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8bc3d17fd8aad78343e2b84f0cac75d1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4
	I1117 22:27:10.486498    3304 preload.go:248] saving checksum for preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 22:27:10.486978    3304 preload.go:255] verifying checksumm of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117222633-9504"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.4-rc.0/LogsDuration (0.51s)

TestDownloadOnly/DeleteAll (3.11s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:189: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (3.1080053s)
--- PASS: TestDownloadOnly/DeleteAll (3.11s)

TestDownloadOnly/DeleteAlwaysSucceeds (2.52s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20211117222633-9504
aaa_download_only_test.go:201: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20211117222633-9504: (2.5235046s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (2.52s)

TestDownloadOnlyKic (23.31s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20211117222722-9504 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20211117222722-9504 --force --alsologtostderr --driver=docker: (19.2014667s)
helpers_test.go:175: Cleaning up "download-docker-20211117222722-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20211117222722-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20211117222722-9504: (2.6620503s)
--- PASS: TestDownloadOnlyKic (23.31s)

TestErrorSpam/start (7.91s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 start --dry-run
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 start --dry-run: (2.6547428s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 start --dry-run
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 start --dry-run: (2.604332s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 start --dry-run
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 start --dry-run: (2.6523352s)
--- PASS: TestErrorSpam/start (7.91s)

TestErrorSpam/status (5.36s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 status
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 status: exit status 7 (1.7511875s)

-- stdout --
	nospam-20211117222914-9504
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E1117 22:30:01.218535    6964 status.go:258] status error: host: state: unknown state "nospam-20211117222914-9504": docker container inspect nospam-20211117222914-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	E1117 22:30:01.218535    6964 status.go:261] The "nospam-20211117222914-9504" host does not exist!

** /stderr **
error_spam_test.go:159: "out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20211117222914-9504 status" failed: exit status 7
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 status
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 status: exit status 7 (1.805068s)

-- stdout --
	nospam-20211117222914-9504
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E1117 22:30:03.025308    2020 status.go:258] status error: host: state: unknown state "nospam-20211117222914-9504": docker container inspect nospam-20211117222914-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	E1117 22:30:03.025372    2020 status.go:261] The "nospam-20211117222914-9504" host does not exist!

** /stderr **
error_spam_test.go:159: "out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20211117222914-9504 status" failed: exit status 7
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 status
error_spam_test.go:180: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 status: exit status 7 (1.7978327s)

-- stdout --
	nospam-20211117222914-9504
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E1117 22:30:04.821997   12100 status.go:258] status error: host: state: unknown state "nospam-20211117222914-9504": docker container inspect nospam-20211117222914-9504 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	E1117 22:30:04.821997   12100 status.go:261] The "nospam-20211117222914-9504" host does not exist!

** /stderr **
error_spam_test.go:182: "out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20211117222914-9504 status" failed: exit status 7
--- PASS: TestErrorSpam/status (5.36s)

TestErrorSpam/pause (5.41s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 pause
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 pause: exit status 80 (1.8401087s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117222914-9504": docker container inspect nospam-20211117222914-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_9.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20211117222914-9504 pause" failed: exit status 80
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 pause
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 pause: exit status 80 (1.7976353s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117222914-9504": docker container inspect nospam-20211117222914-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_9.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20211117222914-9504 pause" failed: exit status 80
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 pause
error_spam_test.go:180: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 pause: exit status 80 (1.7743063s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117222914-9504": docker container inspect nospam-20211117222914-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_9.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:182: "out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20211117222914-9504 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.41s)

TestErrorSpam/unpause (5.35s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 unpause
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 unpause: exit status 80 (1.7756892s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117222914-9504": docker container inspect nospam-20211117222914-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_9.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20211117222914-9504 unpause" failed: exit status 80
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 unpause
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 unpause: exit status 80 (1.7588758s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117222914-9504": docker container inspect nospam-20211117222914-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_9.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20211117222914-9504 unpause" failed: exit status 80
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 unpause
error_spam_test.go:180: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 unpause: exit status 80 (1.8100387s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117222914-9504": docker container inspect nospam-20211117222914-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_9.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:182: "out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20211117222914-9504 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.35s)

TestErrorSpam/stop (45.38s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 stop
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 stop: exit status 82 (15.1254777s)

-- stdout --
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	
	

-- /stdout --
** stderr ** 
	E1117 22:30:19.308589    6540 daemonize_windows.go:39] error terminating scheduled stop for profile nospam-20211117222914-9504: stopping schedule-stop service for profile nospam-20211117222914-9504: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20211117222914-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20211117222914-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20211117222914-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_9.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20211117222914-9504 stop" failed: exit status 82
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 stop
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 stop: exit status 82 (15.080541s)

-- stdout --
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	
	

-- /stdout --
** stderr ** 
	E1117 22:30:34.408627    3964 daemonize_windows.go:39] error terminating scheduled stop for profile nospam-20211117222914-9504: stopping schedule-stop service for profile nospam-20211117222914-9504: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20211117222914-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20211117222914-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20211117222914-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_9.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20211117222914-9504 stop" failed: exit status 82
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 stop
error_spam_test.go:180: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-20211117222914-9504 stop: exit status 82 (15.1721368s)

-- stdout --
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	* Stopping node "nospam-20211117222914-9504"  ...
	
	

-- /stdout --
** stderr ** 
	E1117 22:30:49.568969    6424 daemonize_windows.go:39] error terminating scheduled stop for profile nospam-20211117222914-9504: stopping schedule-stop service for profile nospam-20211117222914-9504: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "nospam-20211117222914-9504": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20211117222914-9504: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20211117222914-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117222914-9504
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_delete_05e3a674b6e518bcc2eafc8a77eb8b77017a009c_9.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:182: "out/minikube-windows-amd64.exe -p nospam-20211117222914-9504 --log_dir C:\\Users\\jenkins.minikube2\\AppData\\Local\\Temp\\nospam-20211117222914-9504 stop" failed: exit status 82
--- PASS: TestErrorSpam/stop (45.38s)
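Every TestErrorSpam failure above follows the same pattern: `pause`, `unpause`, and `stop` each run `docker container inspect` against the profile container, which no longer exists, so minikube exits with GUEST_STATUS (80) or GUEST_STOP_TIMEOUT (82). A minimal sketch of that inspect pre-check (a hypothetical reproduction for triage, not part of the test suite; the profile name is taken from this log):

```shell
# Reproduce the existence check minikube performs before pause/stop.
# The container was already deleted in this run, so inspect exits non-zero
# with "Error: No such container: ...".
PROFILE="nospam-20211117222914-9504"

if docker container inspect "$PROFILE" --format '{{.State.Status}}' >/dev/null 2>&1; then
  echo "container $PROFILE exists; pause/stop would proceed"
else
  # This branch mirrors the exit status 80/82 path seen in the log above.
  echo "container $PROFILE not found; minikube exits with GUEST_STATUS"
fi
```

Running this on the affected Jenkins agent would confirm whether the docker daemon lost the container or the container was never created.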

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1633: local sync path: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\9504\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/CacheCmd/cache/add_local (3.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1014: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20211117223105-9504 C:\Users\jenkins.minikube2\AppData\Local\Temp\functional-20211117223105-95042975638736
functional_test.go:1026: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cache add minikube-local-cache-test:functional-20211117223105-9504
functional_test.go:1026: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cache add minikube-local-cache-test:functional-20211117223105-9504: (2.392892s)
functional_test.go:1031: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 cache delete minikube-local-cache-test:functional-20211117223105-9504
functional_test.go:1020: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20211117223105-9504
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.32s)

TestFunctional/parallel/ConfigCmd (1.87s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 config get cpus: exit status 14 (293.1497ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 config set cpus 2
functional_test.go:1136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 config get cpus
functional_test.go:1136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 config unset cpus
functional_test.go:1136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 config get cpus
functional_test.go:1136: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 config get cpus: exit status 14 (284.9533ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.87s)

TestFunctional/parallel/DryRun (5.05s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:912: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:912: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (2.3924156s)

-- stdout --
	* [functional-20211117223105-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1117 22:34:39.664703    3160 out.go:297] Setting OutFile to fd 324 ...
	I1117 22:34:39.733248    3160 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:34:39.733248    3160 out.go:310] Setting ErrFile to fd 924...
	I1117 22:34:39.733248    3160 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:34:39.743919    3160 out.go:304] Setting JSON to false
	I1117 22:34:39.746367    3160 start.go:112] hostinfo: {"hostname":"minikube2","uptime":77795,"bootTime":1637110684,"procs":130,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 22:34:39.746516    3160 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 22:34:39.770211    3160 out.go:176] * [functional-20211117223105-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 22:34:39.776039    3160 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 22:34:39.779706    3160 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 22:34:39.784290    3160 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 22:34:39.784861    3160 config.go:176] Loaded profile config "functional-20211117223105-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:34:39.787841    3160 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 22:34:41.418126    3160 docker.go:132] docker version: linux-19.03.12
	I1117 22:34:41.420128    3160 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:34:41.777750    3160 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:34:41.506747051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:34:41.781620    3160 out.go:176] * Using the docker driver based on existing profile
	I1117 22:34:41.781620    3160 start.go:280] selected driver: docker
	I1117 22:34:41.781620    3160 start.go:775] validating driver "docker" against &{Name:functional-20211117223105-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117223105-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:34:41.781620    3160 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 22:34:41.832916    3160 out.go:176] 
	W1117 22:34:41.832916    3160 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1117 22:34:41.836979    3160 out.go:176] 

** /stderr **
functional_test.go:929: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --dry-run --alsologtostderr -v=1 --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:929: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --dry-run --alsologtostderr -v=1 --driver=docker: (2.6529798s)
--- PASS: TestFunctional/parallel/DryRun (5.05s)

TestFunctional/parallel/InternationalLanguage (2.36s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:954: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:954: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20211117223105-9504 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (2.359141s)

-- stdout --
	* [functional-20211117223105-9504] minikube v1.24.0 sur Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12739
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1117 22:34:31.943758    4220 out.go:297] Setting OutFile to fd 884 ...
	I1117 22:34:32.011535    4220 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:34:32.011535    4220 out.go:310] Setting ErrFile to fd 1004...
	I1117 22:34:32.011535    4220 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 22:34:32.029543    4220 out.go:304] Setting JSON to false
	I1117 22:34:32.031554    4220 start.go:112] hostinfo: {"hostname":"minikube2","uptime":77787,"bootTime":1637110685,"procs":132,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.18363 Build 18363","kernelVersion":"10.0.18363 Build 18363","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1117 22:34:32.031554    4220 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 22:34:32.035591    4220 out.go:176] * [functional-20211117223105-9504] minikube v1.24.0 sur Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363
	I1117 22:34:32.040123    4220 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1117 22:34:32.042351    4220 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1117 22:34:32.044860    4220 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 22:34:32.045897    4220 config.go:176] Loaded profile config "functional-20211117223105-9504": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 22:34:32.046744    4220 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 22:34:33.669749    4220 docker.go:132] docker version: linux-19.03.12
	I1117 22:34:33.674269    4220 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 22:34:34.023703    4220 info.go:263] docker info: {ID:L2W4:ZZCI:FAHF:ZZT6:NAWJ:ODFR:2ZBQ:YQ77:PYZZ:VR3E:WLUM:OBSV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 22:34:33.753366594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:6239399936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1117 22:34:34.027854    4220 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I1117 22:34:34.027854    4220 start.go:280] selected driver: docker
	I1117 22:34:34.027854    4220 start.go:775] validating driver "docker" against &{Name:functional-20211117223105-9504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117223105-9504 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host}
	I1117 22:34:34.027854    4220 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 22:34:34.082265    4220 out.go:176] 
	W1117 22:34:34.083264    4220 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1117 22:34:34.086270    4220 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (2.36s)

TestFunctional/parallel/AddonsCmd (2.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1482: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1482: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 addons list: (1.9270956s)
functional_test.go:1494: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (2.22s)

TestFunctional/parallel/Version/short (0.37s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2037: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 version --short
--- PASS: TestFunctional/parallel/Version/short (0.37s)

TestFunctional/parallel/ProfileCmd/profile_not_create (3.82s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1213: (dbg) Run:  out/minikube-windows-amd64.exe profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1213: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (1.8635835s)
functional_test.go:1218: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1218: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9546297s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (3.82s)

TestFunctional/parallel/ProfileCmd/profile_list (2.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1253: (dbg) Run:  out/minikube-windows-amd64.exe profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1253: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.9879329s)
functional_test.go:1258: Took "1.9879329s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1267: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1272: Took "300.144ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (2.29s)

TestFunctional/parallel/ProfileCmd/profile_json_output (2.22s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1304: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1304: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.9332072s)
functional_test.go:1309: Took "1.9332072s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1317: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1322: Took "284.9612ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (2.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20211117223105-9504 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20211117223105-9504 tunnel --alsologtostderr] ...
helpers_test.go:506: unable to kill pid 6248: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/Setup (1.9s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:298: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:298: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.7921909s)
functional_test.go:303: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20211117223105-9504
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.90s)

TestFunctional/parallel/ImageCommands/ImageRemove (3.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:333: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image rm gcr.io/google-containers/addon-resizer:functional-20211117223105-9504
functional_test.go:333: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image rm gcr.io/google-containers/addon-resizer:functional-20211117223105-9504: (1.7588953s)
functional_test.go:389: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image ls
functional_test.go:389: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image ls: (1.7487676s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (3.51s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:360: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20211117223105-9504
functional_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image save --daemon gcr.io/google-containers/addon-resizer:functional-20211117223105-9504
functional_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20211117223105-9504 image save --daemon gcr.io/google-containers/addon-resizer:functional-20211117223105-9504: (2.3557319s)
functional_test.go:370: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20211117223105-9504
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.55s)

TestFunctional/delete_addon-resizer_images (0.28s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:184: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:184: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20211117223105-9504
--- PASS: TestFunctional/delete_addon-resizer_images (0.28s)

TestFunctional/delete_my-image_image (0.09s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:192: (dbg) Run:  docker rmi -f localhost/my-image:functional-20211117223105-9504
--- PASS: TestFunctional/delete_my-image_image (0.09s)

TestFunctional/delete_minikube_cached_images (0.09s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:200: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20211117223105-9504
--- PASS: TestFunctional/delete_minikube_cached_images (0.09s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (1.8s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20211117223942-9504 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20211117223942-9504 addons enable ingress-dns --alsologtostderr -v=5: (1.7974633s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (1.80s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (2.44s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20211117224135-9504 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20211117224135-9504 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (296.288ms)

-- stdout --
	{"specversion":"1.0","id":"19ebade0-bda1-435c-870e-1a3237009816","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20211117224135-9504] minikube v1.24.0 on Microsoft Windows 10 Enterprise N 10.0.18363 Build 18363","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e44e346-63f1-48ff-a7f4-b742cadc4c7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"cb13fb2c-b65b-4828-a354-bfe0b1c2c9b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"1b062ced-f895-499d-8a32-c746a6cbbf75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"4aec4ac6-817b-4b49-ac50-984e389aac80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20211117224135-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20211117224135-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20211117224135-9504: (2.1425597s)
--- PASS: TestErrorJSONOutput (2.44s)

TestKicCustomNetwork/use_default_bridge_network (215.65s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20211117224519-9504 --network=bridge
kic_custom_network_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20211117224519-9504 --network=bridge: (2m44.8907452s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20211117224519-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20211117224519-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20211117224519-9504: (50.6562307s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (215.65s)

TestKicExistingNetwork (217.61s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-20211117224855-9504 --network=existing-network
kic_custom_network_test.go:94: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-20211117224855-9504 --network=existing-network: (2m45.5639271s)
helpers_test.go:175: Cleaning up "existing-network-20211117224855-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-20211117224855-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-20211117224855-9504: (51.347037s)
--- PASS: TestKicExistingNetwork (217.61s)

TestMainNoArgs (0.29s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.29s)

TestMountStart/serial/DeleteFirst (2.99s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-20211117225233-9504 --alsologtostderr -v=5
pause_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-20211117225233-9504 --alsologtostderr -v=5: (2.9883191s)
--- PASS: TestMountStart/serial/DeleteFirst (2.99s)

TestRunningBinaryUpgrade (253.16s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2567017859.exe start -p running-upgrade-20211117230442-9504 --memory=2200 --vm-driver=docker
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.2567017859.exe start -p running-upgrade-20211117230442-9504 --memory=2200 --vm-driver=docker: (3m11.6860703s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-20211117230442-9504 --memory=2200 --alsologtostderr -v=1 --driver=docker
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-20211117230442-9504 --memory=2200 --alsologtostderr -v=1 --driver=docker: (53.6560979s)
helpers_test.go:175: Cleaning up "running-upgrade-20211117230442-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20211117230442-9504
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20211117230442-9504: (6.2674767s)
--- PASS: TestRunningBinaryUpgrade (253.16s)

TestNoKubernetes/serial/VerifyK8sNotRunning (1.9s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-20211117230313-9504 "sudo systemctl is-active --quiet service kubelet"
=== CONT  TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:89: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-20211117230313-9504 "sudo systemctl is-active --quiet service kubelet": exit status 80 (1.8947024s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "NoKubernetes-20211117230313-9504": docker container inspect NoKubernetes-20211117230313-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20211117230313-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_ef99f3f3976bdc9ede40cba20b814885e47e2c2a_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (1.90s)

TestNoKubernetes/serial/ProfileList (4.36s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe profile list
=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe profile list: (2.2411652s)
no_kubernetes_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (2.1220304s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.36s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (3.01s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-20211117230313-9504 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:89: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-20211117230313-9504 "sudo systemctl is-active --quiet service kubelet": exit status 80 (3.0126172s)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "NoKubernetes-20211117230313-9504": docker container inspect NoKubernetes-20211117230313-9504 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20211117230313-9504
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                     │
	│    * If the above advice does not help, please let us know:                                                         │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                       │
	│                                                                                                                     │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                            │
	│    * Please also attach the following file to the GitHub issue:                                                     │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_ssh_ef99f3f3976bdc9ede40cba20b814885e47e2c2a_0.log    │
	│                                                                                                                     │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (3.01s)

TestStoppedBinaryUpgrade/Setup (0.59s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.59s)

TestStoppedBinaryUpgrade/Upgrade (168.32s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.676631.exe start -p stopped-upgrade-20211117230646-9504 --memory=2200 --vm-driver=docker
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.676631.exe start -p stopped-upgrade-20211117230646-9504 --memory=2200 --vm-driver=docker: (1m42.7156573s)
version_upgrade_test.go:199: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.676631.exe -p stopped-upgrade-20211117230646-9504 stop
version_upgrade_test.go:199: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.676631.exe -p stopped-upgrade-20211117230646-9504 stop: (13.4095293s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-20211117230646-9504 --memory=2200 --alsologtostderr -v=1 --driver=docker
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-20211117230646-9504 --memory=2200 --alsologtostderr -v=1 --driver=docker: (52.1916842s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (168.32s)

TestStoppedBinaryUpgrade/MinikubeLogs (4.47s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20211117230646-9504
=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20211117230646-9504: (4.4687296s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (4.47s)

TestPause/serial/DeletePaused (3.48s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-20211117230855-9504 --alsologtostderr -v=5
pause_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-20211117230855-9504 --alsologtostderr -v=5: (3.4829932s)
--- PASS: TestPause/serial/DeletePaused (3.48s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.81s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20211117231341-9504 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20211117231341-9504 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.8102284s)
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.81s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (20/234)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.22.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.3/cached-images (0.00s)

TestDownloadOnly/v1.22.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.3/binaries (0.00s)

TestDownloadOnly/v1.22.4-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.4-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.4-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.4-rc.0/binaries (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:187: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:847: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20211117223105-9504 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:858: output didn't produce a URL
functional_test.go:852: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20211117223105-9504 --alsologtostderr -v=1] ...
helpers_test.go:488: unable to find parent, assuming dead: process does not exist
--- SKIP: TestFunctional/parallel/DashboardCmd (300.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:58: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:491: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:77: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestNetworkPlugins/group/flannel (2.31s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20211117230313-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20211117230313-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20211117230313-9504: (2.3110074s)
--- SKIP: TestNetworkPlugins/group/flannel (2.31s)

TestStartStop/group/disable-driver-mounts (2.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20211117231131-9504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20211117231131-9504
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20211117231131-9504: (2.1432062s)
--- SKIP: TestStartStop/group/disable-driver-mounts (2.14s)